Updates from: 01/06/2021 04:04:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/add-ropc-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-ropc-policy.md
@@ -304,7 +304,7 @@ Use your favorite API development application to generate an API call, and revie
The actual POST request looks like the following example: ```https
-POST /<tenant-name>.onmicrosoft.com/oauth2/v2.0/token?B2C_1A_ROPC_Auth HTTP/1.1
+POST /<tenant-name>.onmicrosoft.com/oauth2/v2.0/token?p=B2C_1A_ROPC_Auth HTTP/1.1
Host: <tenant-name>.b2clogin.com Content-Type: application/x-www-form-urlencoded
@@ -367,4 +367,4 @@ Azure AD B2C meets OAuth 2.0 standards for public client resource owner password
## Next steps
-Download working samples that have been configured for use with Azure AD B2C from GitHub, [for Android](https://aka.ms/aadb2cappauthropc) and [for iOS](https://aka.ms/aadb2ciosappauthropc).
\ No newline at end of file
+Download working samples that have been configured for use with Azure AD B2C from GitHub, [for Android](https://aka.ms/aadb2cappauthropc) and [for iOS](https://aka.ms/aadb2ciosappauthropc).
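The corrected request above (note the added `p=` query parameter naming the policy) can be sketched as follows. This is a minimal illustration, not part of the article: the tenant name, credentials, and client ID are placeholders, and the form fields follow the standard OAuth 2.0 resource owner password credentials grant.

```python
from urllib.parse import urlencode

tenant = "contoso"            # placeholder tenant name (must be lowercase)
policy = "B2C_1A_ROPC_Auth"   # the ROPC user flow / custom policy ID

# Token endpoint with the policy passed via the `p` query parameter,
# matching the corrected POST line in the diff above.
url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com"
    f"/oauth2/v2.0/token?p={policy}"
)

# application/x-www-form-urlencoded request body for the ROPC grant.
body = urlencode({
    "grant_type": "password",                 # ROPC grant type
    "username": "user@contoso.com",           # placeholder resource owner
    "password": "example-password",           # placeholder credential
    "client_id": "00000000-0000-0000-0000-000000000000",
    "scope": "openid 00000000-0000-0000-0000-000000000000 offline_access",
})
```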
active-directory-b2c https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-salesforce https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-salesforce.md
@@ -9,7 +9,7 @@ manager: celestedg
ms.service: active-directory ms.workload: identity ms.topic: how-to
-ms.date: 12/07/2020
+ms.date: 01/05/2021
ms.custom: project-no-code ms.author: mimart ms.subservice: B2C
@@ -44,10 +44,12 @@ To use a Salesforce account in Azure Active Directory B2C (Azure AD B2C), you ne
1. **API Name** 1. **Contact Email** - The contact email for Salesforce 1. Under **API (Enable OAuth Settings)**, select **Enable OAuth Settings**
-1. In **Callback URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
-1. In the **Selected OAuth Scopes**, select **Access your basic information (id, profile, email, address, phone)**, and **Allow access to your unique identifier (openid)**.
-1. Select **Require Secret for Web Server Flow**.
-1. Select **Configure ID Token**, and then select **Include Standard Claims**.
+ 1. In **Callback URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-tenant-name` with the name of your tenant. You need to use all lowercase letters when entering your tenant name even if the tenant is defined with uppercase letters in Azure AD B2C.
+ 1. In the **Selected OAuth Scopes**, select **Access your basic information (id, profile, email, address, phone)**, and **Allow access to your unique identifier (openid)**.
+ 1. Select **Require Secret for Web Server Flow**.
+1. Select **Configure ID Token**
+ 1. Set **Token Valid for** to *5 minutes*.

+ 1. Select **Include Standard Claims**.
1. Click **Save**. 1. Copy the values of **Consumer Key** and **Consumer Secret**. You will need both of them to configure Salesforce as an identity provider in your tenant. **Client secret** is an important security credential.
@@ -59,10 +61,10 @@ To use a Salesforce account in Azure Active Directory B2C (Azure AD B2C), you ne
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**. 1. Select **Identity providers**, and then select **New OpenID Connect provider**. 1. Enter a **Name**. For example, enter *Salesforce*.
-1. For **Metadata url**, enter the following URL replacing `{org}` with your Salesforce organization:
+1. For **Metadata url**, enter the URL of the [Salesforce OpenID Connect Configuration document](https://help.salesforce.com/articleView?id=remoteaccess_using_openid_discovery_endpoint.htm). For a sandbox, `login.salesforce.com` is replaced with `test.salesforce.com`. For a community, `login.salesforce.com` is replaced with the community URL, such as `username.force.com/.well-known/openid-configuration`. The URL must be HTTPS.
```
- https://{org}.my.salesforce.com/.well-known/openid-configuration
+ https://login.salesforce.com/.well-known/openid-configuration
``` 1. For **Client ID**, enter the application ID that you previously recorded.
@@ -76,7 +78,7 @@ To use a Salesforce account in Azure Active Directory B2C (Azure AD B2C), you ne
- **Display name**: *name* - **Given name**: *given_name* - **Surname**: *family_name*
- - **Email**: *preferred_username*
+ - **Email**: *email*
1. Select **Save**. ::: zone-end
@@ -117,8 +119,7 @@ You can define a Salesforce account as a claims provider by adding it to the **C
<DisplayName>Salesforce</DisplayName> <Protocol Name="OpenIdConnect" /> <Metadata>
- <!-- Update the {org} below to your Salesforce organization -->
- <Item Key="METADATA">https://{org}.my.salesforce.com/.well-known/openid-configuration</Item>
+ <Item Key="METADATA">https://login.salesforce.com/.well-known/openid-configuration</Item>
<Item Key="response_types">code</Item> <Item Key="response_mode">form_post</Item> <Item Key="scope">openid id profile email</Item>
@@ -150,7 +151,7 @@ You can define a Salesforce account as a claims provider by adding it to the **C
</ClaimsProvider> ```
-4. Set **METADATA** URI `{org}` with your Salesforce organization.
+4. The **METADATA** is set to the URL of the [Salesforce OpenID Connect Configuration document](https://help.salesforce.com/articleView?id=remoteaccess_using_openid_discovery_endpoint.htm). For a sandbox, `login.salesforce.com` is replaced with `test.salesforce.com`. For a community, `login.salesforce.com` is replaced with the community URL, such as `username.force.com/.well-known/openid-configuration`. The URL must be HTTPS.
5. Set **client_id** to the application ID from the application registration. 6. Save the file.
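The environment-to-URL mapping described above can be sketched with a small helper. This function is purely illustrative (it is not part of the article or any Salesforce SDK), and `community_host` is a placeholder such as `username.force.com`:

```python
def salesforce_metadata_url(environment="production", community_host=None):
    """Map a Salesforce environment to its OpenID Connect metadata URL.

    Illustrative helper only. The URL must be HTTPS in all cases:
    production uses login.salesforce.com, a sandbox uses
    test.salesforce.com, and a community uses its own host.
    """
    if environment == "sandbox":
        host = "test.salesforce.com"
    elif environment == "community":
        host = community_host
    else:
        host = "login.salesforce.com"
    return f"https://{host}/.well-known/openid-configuration"
```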
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/application-provisioning-quarantine-status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
@@ -15,9 +15,12 @@ ms.reviewer: arvinh
# Application provisioning in quarantine status
-The Azure AD provisioning service monitors the health of your configuration and places unhealthy apps in a "quarantine" state. If most or all of the calls made against the target system consistently fail because of an error, for example invalid admin credentials, the provisioning job is marked as in quarantine.
+The Azure AD provisioning service monitors the health of your configuration. It also places unhealthy apps in a "quarantine" state. If most or all of the calls made against the target system consistently fail, the provisioning job is marked as in quarantine. An example of a failure is an error received because of invalid admin credentials.
-While in quarantine, the frequency of incremental cycles is gradually reduced to once per day. The provisioning job is removed from quarantine after all errors are fixed and the next sync cycle starts. If the provisioning job stays in quarantine for more than four weeks, the provisioning job is disabled (stops running).
+While in quarantine:
+- The frequency of incremental cycles is gradually reduced to once per day.
+- The provisioning job is removed from quarantine after all errors are fixed and the next sync cycle starts.
+- If the provisioning job stays in quarantine for more than four weeks, the provisioning job is disabled (stops running).
## How do I know if my application is in quarantine?
@@ -27,9 +30,9 @@ There are three ways to check whether an application is in quarantine:
![Provisioning status bar showing quarantine status](./media/application-provisioning-quarantine-status/progress-bar-quarantined.png) -- In the Azure portal, navigate to **Azure Active Directory** > **Audit Logs** > filter on **Activity: Quarantine** and review the quarantine history. While the view in the progress bar as described above shows whether provisioning is currently in quarantine, the audit logs allow you to see the quarantine history for an application.
+- In the Azure portal, navigate to **Azure Active Directory** > **Audit Logs** > filter on **Activity: Quarantine** and review the quarantine history. The view in the progress bar as described above shows whether provisioning is currently in quarantine. The audit logs show the quarantine history for an application.
-- Use the Microsoft Graph request [Get synchronizationJob](/graph/api/synchronization-synchronizationjob-get?tabs=http&view=graph-rest-beta) to programmatically get the status of the provisioning job:
+- Use the Microsoft Graph request [Get synchronizationJob](/graph/api/synchronization-synchronizationjob-get?tabs=http&view=graph-rest-beta&preserve-view=true) to programmatically get the status of the provisioning job:
```microsoft-graph GET https://graph.microsoft.com/beta/servicePrincipals/{id}/synchronization/jobs/{jobId}/
@@ -37,43 +40,52 @@ There are three ways to check whether an application is in quarantine:
- Check your email. When an application is placed in quarantine, a one-time notification email is sent. If the quarantine reason changes, an updated email is sent showing the new reason for quarantine. If you don't see an email:
- - Make sure you have specified a valid **Notification Email** in the provisioning configuration for the application.
- - Make sure there is no spam filtering on the notification email inbox.
- - Make sure you have not unsubscribed from emails.
- - Check for emails from azure-noreply@microsoft.com
+ - Make sure you've specified a valid **Notification Email** in the provisioning configuration for the application.
+ - Make sure there's no spam filtering on the notification email inbox.
+ - Make sure you haven't unsubscribed from emails.
+ - Check for emails from `azure-noreply@microsoft.com`
## Why is my application in quarantine? |Description|Recommended Action| |---|---|
-|**SCIM Compliance issue:** An HTTP/404 Not Found response was returned rather than the expected HTTP/200 OK response. In this case the Azure AD provisioning service has made a request to the target application and received an unexpected response.|Check the admin credentials section to see if the application requires specifying the tenant URL and ensure that the URL is correct. If you don't see an issue, please contact the application developer to ensure that their service is SCIM-compliant. https://tools.ietf.org/html/rfc7644#section-3.4.2 |
-|**Invalid credentials:** When attempting to authorize access to the target application we received a response from the target application that indicates the credentials provided are invalid.|Please navigate to the admin credentials section of the provisioning configuration UI and authorize access again with valid credentials. If the application is in the gallery, review the application configuration tutorial for any additional steps required.|
+|**SCIM Compliance issue:** An HTTP/404 Not Found response was returned rather than the expected HTTP/200 OK response. In this case, the Azure AD provisioning service has made a request to the target application and received an unexpected response.|Check the admin credentials section. See if the application requires specifying the tenant URL, and ensure that the URL is correct. If you don't see an issue, contact the application developer to ensure that their service is SCIM-compliant. https://tools.ietf.org/html/rfc7644#section-3.4.2 |
+|**Invalid credentials:** When attempting to authorize access to the target application, we received a response from the target application that indicates the credentials provided are invalid.|Navigate to the admin credentials section of the provisioning configuration UI and authorize access again with valid credentials. If the application is in the gallery, review the application configuration tutorial for any additional required steps.|
|**Duplicate roles:** Roles imported from certain applications like Salesforce and Zendesk must be unique. |Navigate to the application [manifest](../develop/reference-app-manifest.md) in the Azure portal and remove the duplicate role.| A Microsoft Graph request to get the status of the provisioning job shows the following reason for quarantine:- - `EncounteredQuarantineException` indicates that invalid credentials were provided. The provisioning service is unable to establish a connection between the source system and the target system.
+- `EncounteredEscrowProportionThreshold` indicates that provisioning exceeded the escrow threshold. This condition occurs when more than 40% of provisioning events failed. For more information, see escrow threshold details below.
+- `QuarantineOnDemand` means that we've detected an issue with your application and have manually set it to quarantine.
-- `EncounteredEscrowProportionThreshold` indicates that provisioning exceeded the escrow threshold. This condition occurs when more than 60% of provisioning events failed.
+**Escrow thresholds**
+
+If the proportional escrow threshold is met, the provisioning job will go into quarantine. This logic is subject to change, but works roughly as described below:
+
+A job can go into quarantine regardless of failure counts for issues such as admin credentials or SCIM compliance. However, in general, 5,000 failures is the minimum needed to start evaluating whether to quarantine because of too many failures. For example, a job with 4,000 failures wouldn't go into quarantine, but a job with 5,000 failures would trigger an evaluation. An evaluation uses the following criteria:
+- If more than 40% of provisioning events fail, or there are more than 40,000 failures, the provisioning job will go into quarantine. Reference failures won't be counted as part of the 40% threshold or 40,000 threshold. For example, failure to update a manager or a group member is a reference failure.
+- A job where 45,000 users were unsuccessfully provisioned would lead to quarantine as it exceeds the 40,000 threshold.
+- A job where 30,000 users failed provisioning and 5,000 were successful would lead to quarantine as it exceeds the 40% threshold and 5,000 minimum.
+- A job with 20,000 failures and 100,000 successes wouldn't go into quarantine because it does not exceed the 40% failure threshold or the 40,000 failure maximum.
+- There's an absolute threshold of 60,000 failures that accounts for both reference and non-reference failures. For example, 40,000 users failed to be provisioned and 21,000 manager updates failed. The total is 61,000 failures and exceeds the 60,000 limit.
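The threshold rules above can be expressed as a short function. The article notes this logic is subject to change, so treat this as an illustration of the worked examples, not the provisioning service's actual code:

```python
def should_quarantine(failures, successes, reference_failures=0):
    """Sketch of the documented escrow-threshold evaluation.

    `failures` counts non-reference provisioning failures;
    `reference_failures` counts failures such as manager or
    group-member updates, which only count toward the absolute limit.
    """
    # Absolute threshold: 60,000 combined reference + non-reference failures.
    if failures + reference_failures > 60_000:
        return True
    # Fewer than 5,000 non-reference failures: no evaluation happens.
    if failures < 5_000:
        return False
    # More than 40,000 non-reference failures triggers quarantine outright.
    if failures > 40_000:
        return True
    # Otherwise quarantine only when the failure rate exceeds 40%.
    total = failures + successes
    return total > 0 and failures / total > 0.40
```

Running the article's examples through this sketch reproduces each outcome: 4,000 failures is below the evaluation minimum, 45,000 exceeds the 40,000 cap, 30,000 failures against 5,000 successes exceeds the 40% rate, and 20,000 failures against 100,000 successes passes neither test.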
-- `QuarantineOnDemand` means that we've detected an issue with your application and have manually set it to quarantine. ## How do I get my application out of quarantine? First, resolve the issue that caused the application to be placed in quarantine. -- Check the application's provisioning settings to make sure you've [entered valid Admin Credentials](../app-provisioning/configure-automatic-user-provisioning-portal.md#configuring-automatic-user-account-provisioning). Azure AD must be able to establish a trust with the target application. Ensure that you have entered valid credentials and your account has the necessary permissions.
+- Check the application's provisioning settings to make sure you've [entered valid Admin Credentials](../app-provisioning/configure-automatic-user-provisioning-portal.md#configuring-automatic-user-account-provisioning). Azure AD must establish a trust with the target application. Ensure that you have entered valid credentials and your account has the necessary permissions.
-- Review the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to further investigate what errors are causing quarantine and address the error. Access the provisioning logs in the Azure portal by going to **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs (preview)** in the **Activity** section.
+- Review the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to further investigate what errors are causing quarantine and address the error. Go to **Azure Active Directory** &gt; **Enterprise Apps** &gt; **Provisioning logs (preview)** in the **Activity** section.
After you've resolved the issue, restart the provisioning job. Certain changes to the application's provisioning settings, such as attribute mappings or scoping filters, will automatically restart provisioning for you. The progress bar on the application's **Provisioning** page indicates when provisioning last started. If you need to restart the provisioning job manually, use one of the following methods: - Use the Azure portal to restart the provisioning job. On the application's **Provisioning** page under **Settings**, select **Clear state and restart synchronization** and set **Provisioning Status** to **On**. This action fully restarts the provisioning service, which can take some time. A full initial cycle will run again, which clears escrows, removes the app from quarantine, and clears any watermarks. -- Use Microsoft Graph to [restart the provisioning job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta). You'll have full control over what you restart. You can choose to clear escrows (to restart the escrow counter that accrues toward quarantine status), clear quarantine (to remove the application from quarantine), or clear watermarks. Use the following request:
+- Use Microsoft Graph to [restart the provisioning job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). You'll have full control over what you restart. You can choose to clear escrows (to restart the escrow counter that accrues toward quarantine status), clear quarantine (to remove the application from quarantine), or clear watermarks. Use the following request:
```microsoft-graph POST /servicePrincipals/{id}/synchronization/jobs/{jobId}/restart ```
-Replace "{id}" with the value of the Application ID, and replace "{jobId}" with the [ID of the synchronization job](/graph/api/resources/synchronization-configure-with-directory-extension-attributes?tabs=http&view=graph-rest-beta#list-synchronization-jobs-in-the-context-of-the-service-principal).
+Replace "{id}" with the value of the Application ID, and replace "{jobId}" with the [ID of the synchronization job](/graph/api/resources/synchronization-configure-with-directory-extension-attributes?tabs=http&view=graph-rest-beta&preserve-view=true#list-synchronization-jobs-in-the-context-of-the-service-principal).
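Constructing the restart call can be sketched as follows. This is hedged: the `criteria`/`resetScope` request body is based on the beta synchronizationJob restart API, so check the linked Graph reference for the current shape and values. `{id}` and `{jobId}` remain placeholders, as in the article:

```python
import json

app_id = "{id}"      # placeholder: service principal object ID
job_id = "{jobId}"   # placeholder: synchronization job ID

# Restart endpoint from the snippet above, on the beta Graph endpoint.
url = (
    "https://graph.microsoft.com/beta/servicePrincipals/"
    f"{app_id}/synchronization/jobs/{job_id}/restart"
)

# Choose which state to clear: escrows (the counter that accrues toward
# quarantine), the quarantine state itself, and/or watermarks.
body = json.dumps(
    {"criteria": {"resetScope": "Escrows, QuarantineState, Watermark"}}
)
```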
active-directory https://docs.microsoft.com/en-us/azure/active-directory/app-provisioning/known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/known-issues.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: app-provisioning ms.workload: identity ms.topic: troubleshooting
-ms.date: 12/14/2020
+ms.date: 01/05/2021
ms.reviewer: arvinh ---
@@ -78,6 +78,10 @@ The [time](./application-provisioning-when-will-provisioning-finish-specific-use
The app provisioning service isn't aware of changes made in external apps. So, no action is taken to roll back. The app provisioning service relies on changes made in Azure AD.
+**Switching from sync all to sync assigned not working**
+
+After changing scope from 'Sync All' to 'Sync Assigned', make sure to also perform a restart to ensure that the change takes effect. You can restart from the UI.
+ **Provisioning cycle continues until completion** When setting provisioning `enabled = off`, or hitting stop, the current provisioning cycle will continue running until completion. The service will stop executing any future cycles until you turn provisioning on again.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/concept-conditional-access-grant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-grant.md
@@ -99,6 +99,7 @@ This setting applies to the following iOS and Android apps:
- Microsoft Word - Microsoft Yammer - Microsoft Whiteboard
+- Microsoft 365 Admin
**Remarks**
active-directory https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md
@@ -28,6 +28,7 @@ Microsoft recommends you require MFA on the following roles at a minimum:
* Global administrator * Helpdesk administrator * Password administrator
+* Privileged Role Administrator
* Security administrator * SharePoint administrator * User administrator
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-optional-claims https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-optional-claims.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: develop ms.topic: how-to ms.workload: identity
-ms.date: 11/30/2020
+ms.date: 1/04/2021
ms.author: ryanwi ms.reviewer: paulgarn, hirsin, keyam ms.custom: aaddev
@@ -62,7 +62,7 @@ The set of optional claims available by default for applications to use are list
| `ztdid` | Zero-touch Deployment ID | JWT | | The device identity used for [Windows AutoPilot](/windows/deployment/windows-autopilot/windows-10-autopilot) | | `email` | The addressable email for this user, if the user has one. | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. For managed users, the email address must be set in the [Office admin portal](https://portal.office.com/adminportal/home#/users).| | `acct` | Users account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they are a guest, the value is `1`. |
-| `groups`| Optional formatting for group claims |JWT, SAML| |Used in conjunction with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md)
+| `groups`| Optional formatting for group claims |JWT, SAML| |Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md)
| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used to uniquely identify user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. | | `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This is the most accurate way for an API to determine if a token is an app token or an app+user token.|
@@ -81,7 +81,17 @@ These claims are always included in v1.0 Azure AD tokens, but not included in v2
| `in_corp` | Inside Corporate Network | Signals if the client is logging in from the corporate network. If they're not, the claim isn't included. | Based off of the [trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) settings in MFA. | | `family_name` | Last Name | Provides the last name, surname, or family name of the user as defined in the user object. <br>"family_name":"Miller" | Supported in MSA and Azure AD. Requires the `profile` scope. | | `given_name` | First name | Provides the first or "given" name of the user, as set on the user object.<br>"given_name": "Frank" | Supported in MSA and Azure AD. Requires the `profile` scope. |
-| `upn` | User Principal Name | An identifer for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used to uniquely identity user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
+| `upn` | User Principal Name | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and should not be used to uniquely identify user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) should not be shown their User Principal Name (UPN). Instead, use the `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
+
+**Table 4: v1.0-only optional claims**
+
+Some improvements in the v2 token format are available to apps that use the v1 token format, as they help improve security and reliability. These will not take effect for ID tokens requested from the v2 endpoint, nor for access tokens for APIs that use the v2 token format.
+
+| JWT Claim | Name | Description | Notes |
+|---------------|---------------------------------|-------------|-------|
+|`aud` | Audience | Always present in JWTs, but in v1 access tokens it can be emitted in a variety of ways, which can be hard to code against when performing token validation. Use the [additional properties for this claim](#additional-properties-of-optional-claims) to ensure it's always set to a GUID in v1 access tokens. | v1 JWT access tokens only|
+|`preferred_username` | Preferred username | Provides the preferred username claim within v1 tokens. This makes it easier for apps to provide username hints and show human-readable display names, regardless of their token type. It's recommended that you use this optional claim instead of, for example, `upn` or `unique_name`. | v1 ID tokens and access tokens |
### Additional properties of optional claims
@@ -93,7 +103,9 @@ Some optional claims can be configured to change the way the claim is returned.
|----------------|--------------------------|-------------| | `upn` | | Can be used for both SAML and JWT responses, and for v1.0 and v2.0 tokens. | | | `include_externally_authenticated_upn` | Includes the guest UPN as stored in the resource tenant. For example, `foo_hometenant.com#EXT#@resourcetenant.com` |
-| | `include_externally_authenticated_upn_without_hash` | Same as above, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com` |
+| | `include_externally_authenticated_upn_without_hash` | Same as above, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com`|
+| `aud` | | In v1 access tokens, this is used to change the format of the `aud` claim. This has no effect in v2 tokens or ID tokens, where the `aud` claim is always the client ID. Use this to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token.|
+| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim instead of an appid URI or GUID. So if a resource's client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`.|
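The audience-validation benefit described above can be sketched briefly. This illustrative check (not from the article) assumes the resource requested the `aud` optional claim with `use_guid`, so the claim is always the API's client ID as a GUID and validation is a single exact comparison:

```python
import uuid

def validate_audience(aud_claim, expected_client_id):
    """Return True if `aud_claim` is the expected client-ID GUID.

    With `use_guid`, v1 access tokens always carry the API's client ID
    in GUID format, so a non-GUID `aud` (e.g. an appid URI) fails fast.
    """
    try:
        aud = uuid.UUID(aud_claim)
    except ValueError:
        return False  # not a GUID; token wasn't issued with use_guid
    return aud == uuid.UUID(expected_client_id)
```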
#### Additional properties example
active-directory https://docs.microsoft.com/en-us/azure/active-directory/develop/tutorial-blazor-webassembly https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/tutorial-blazor-webassembly.md
@@ -14,9 +14,7 @@ ms.date: 10/16/2020
# Tutorial: Sign in users and call a protected API from a Blazor WebAssembly app
-In this tutorial, you build a Blazor WebAssembly app that signs in users and gets data from Microsoft Graph by using the Microsoft identity platform and registering your app in Azure Active Directory (Azure AD).
-
-We also have a [tutorial for Blazor Server](tutorial-blazor-server.md).
+In this tutorial, you build a Blazor WebAssembly app that signs in users and gets data from Microsoft Graph by using the Microsoft identity platform and registering your app in Azure Active Directory (Azure AD).
In this tutorial:
@@ -25,6 +23,10 @@ In this tutorial:
> * Create a new Blazor WebAssembly app configured to use Azure Active Directory (Azure AD) for [authentication and authorization](authentication-vs-authorization.md) using the Microsoft identity platform > * Retrieve data from a protected web API, in this case [Microsoft Graph](/graph/overview)
+This tutorial uses .NET Core 3.1. The .NET docs contain instructions on [how to secure a Blazor WebAssembly app](https://docs.microsoft.com/aspnet/core/blazor/security/webassembly/graph-api) using ASP.NET Core 5.0.
+
+We also have a [tutorial for Blazor Server](tutorial-blazor-server.md).
+ ## Prerequisites * [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1)
@@ -71,9 +73,11 @@ In your browser, navigate to `https://localhost:5001`, and log in using an Azure
The components of this template that enable logins with Azure AD using the Microsoft identity platform are explained in the [ASP.NET doc on this topic](/aspnet/core/blazor/security/webassembly/standalone-with-azure-active-directory#authentication-package).
-## Retrieving data from Microsoft Graph
+## Retrieving data from a protected API (Microsoft Graph)
+
+[Microsoft Graph](/graph/overview) contains APIs that provide access to Microsoft 365 data for your users, and it supports the tokens issued by the Microsoft identity platform, which makes it a good protected API to use as an example. In this section, you add code to call Microsoft Graph and display the user's emails on the application's "Fetch data" page.
-[Microsoft Graph](/graph/overview) offers a range of APIs that provide access to Microsoft 365 data of users in your tenant. By using the Microsoft identity platform as the identity provider for your app, you have easier access to this information since Microsoft Graph directly supports the tokens issued by the Microsoft identity platform. In this section, you add code can display the signed in user's emails on the application's "Fetch data" page.
+This section is written using a common approach to calling a protected API with a named client. The same method can be used for other protected APIs you want to call. However, if you do plan to call Microsoft Graph from your application, you can use the Graph SDK to reduce boilerplate. The .NET docs contain instructions on [how to use the Graph SDK](https://docs.microsoft.com/aspnet/core/blazor/security/webassembly/graph-api?view=aspnetcore-5.0).
Before you start, log out of your app since you'll be making changes to the required permissions, and your current token won't work. If you haven't already, run your app again and select **Log out** before updating the code below.
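Conceptually, the named-client call to Microsoft Graph boils down to an HTTP GET with a bearer token attached. A minimal language-agnostic sketch in Python (the helper name and placeholder token are illustrative, not part of the Blazor template):

```python
from urllib.request import Request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_graph_request(access_token: str, path: str = "/me/messages") -> Request:
    # Attach the token issued by the Microsoft identity platform as a
    # bearer token; Graph validates it and scopes the response to the
    # signed-in user.
    return Request(
        GRAPH_BASE + path,
        headers={"Authorization": f"Bearer {access_token}"},
    )

req = build_graph_request("<access-token>")
print(req.full_url)
```

In the Blazor app itself, the named `HttpClient` that the tutorial configures plays this role, attaching the token for you on each request.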
@@ -239,4 +243,4 @@ After granting consent, navigate to the "Fetch data" page to read some email.
## Next steps > [!div class="nextstepaction"]
-> [Microsoft identity platform best practices and recommendations](./identity-platform-integration-checklist.md)
\ No newline at end of file
+> [Microsoft identity platform best practices and recommendations](./identity-platform-integration-checklist.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/devices/troubleshoot-hybrid-join-windows-legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-legacy.md
@@ -43,6 +43,7 @@ This article provides you with troubleshooting guidance on how to resolve potent
- You can also get multiple entries for a device on the user info tab because of a reinstallation of the operating system or a manual re-registration. - The initial registration / join of devices is configured to perform an attempt at either sign-in or lock / unlock. There could be 5-minute delay triggered by a task scheduler task. - Make sure [KB4284842](https://support.microsoft.com/help/4284842) is installed, in case of Windows 7 SP1 or Windows Server 2008 R2 SP1. This update prevents future authentication failures due to customer's access loss to protected keys after changing password.
+- Hybrid Azure AD join may fail after a user's UPN changes, breaking the Seamless SSO authentication process. During the join process, you may see that the old UPN is still sent to Azure AD unless the browser session cookies are cleared or the user explicitly signs out and removes the old UPN.
## Step 1: Retrieve the registration status
active-directory https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/licensing-groups-resolve-problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
@@ -64,7 +64,6 @@ To see which users and groups are consuming licenses, select a product. Under **
Consider the following example. A user has a license for Office 365 Enterprise *E1* assigned directly, with all the plans enabled. The user has been added to a group that has the Office 365 Enterprise *E3* product assigned to it. The E3 product contains service plans that can't overlap with the plans that are included in E1, so the group license assignment fails with the "Conflicting service plans" error. In this example, the conflicting service plans are: -- SharePoint Online (Plan 2) conflicts with SharePoint Online (Plan 1). - Exchange Online (Plan 2) conflicts with Exchange Online (Plan 1). To solve this conflict, you need to disable two of the plans. You can disable the E1 license that's directly assigned to the user. Or, you need to modify the entire group license assignment and disable the plans in the E3 license. Alternatively, you might decide to remove the E1 license from the user if it's redundant in the context of the E3 license.
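The failure mode described here is mechanical: two assigned products may not both enable a service plan from the same family. A rough sketch of the check, using a toy plan model (illustrative only, not the actual licensing API):

```python
def conflicting_plan_families(direct_plans, group_plans):
    # Plans are modeled as (family, tier) pairs, e.g. ("Exchange Online", 1).
    # Two plans from the same family can't both be enabled for one user,
    # so any family present in both assignments is a conflict.
    direct_families = {family for family, _ in direct_plans}
    return sorted({family for family, _ in group_plans if family in direct_families})

e1 = [("Exchange Online", 1), ("SharePoint Online", 1)]
e3 = [("Exchange Online", 2), ("SharePoint Online", 2)]
print(conflicting_plan_families(e1, e3))
```

Resolving the error means removing each conflicting family from one side, by disabling plans in either the direct or the group assignment.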
active-directory https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-health-agent-install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
@@ -278,6 +278,17 @@ After you install the appropriate agent *setup.exe* file, you can register the a
```
+> [!NOTE]
+> To register against sovereign clouds, use the following command lines:
+>
+> ```powershell
+> Register-AzureADConnectHealthADFSAgent -UserPrincipalName upn-of-the-user
+> Register-AzureADConnectHealthADDSAgent -UserPrincipalName upn-of-the-user
+> Register-AzureADConnectHealthSyncAgent -UserPrincipalName upn-of-the-user
+> ```
+>
++ These commands accept `Credential` as a parameter to complete the registration noninteractively or to complete the registration on a machine that runs Server Core. Keep in mind that: * You can capture `Credential` in a PowerShell variable that's passed as a parameter. * You can provide any Azure AD identity that has permissions to register the agents and that does *not* have multifactor authentication enabled.
active-directory https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/delete-application-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.subservice: app-mgmt ms.topic: quickstart ms.workload: identity
-ms.date: 12/28/2020
+ms.date: 1/5/2021
ms.author: kenwith ---
@@ -16,6 +16,8 @@ ms.author: kenwith
This quickstart uses the Azure portal to delete an application that was added to your Azure Active Directory (Azure AD) tenant.
+To learn more about SSO and Azure, see [What is Single Sign-On (SSO)](what-is-single-sign-on.md).
+ ## Prerequisites To delete an application from your Azure AD tenant, you need:
active-directory https://docs.microsoft.com/en-us/azure/active-directory/roles/custom-create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-create.md
@@ -8,7 +8,7 @@ ms.service: active-directory
ms.workload: identity ms.subservice: roles ms.topic: how-to
-ms.date: 11/04/2020
+ms.date: 01/05/2021
ms.author: curtand ms.reviewer: vincesm ms.custom: it-pro
@@ -52,17 +52,26 @@ First, you must [download the Azure AD Preview PowerShell module](https://www.po
To install the Azure AD PowerShell module, use the following commands: ``` PowerShell
-Install-Module AzureADPreview
-Import-Module AzureADPreview
+install-module azureadpreview
+import-module azureadpreview
``` To verify that the module is ready to use, use the following command: ``` PowerShell
-Get-Module AzureADPreview
- ModuleType Version Name ExportedCommands
- ---------- --------- ---- ----------------
- Binary 2.0.2.31 azuread {Add-AzureADAdministrati...}
+get-module azureadpreview
+
+ ModuleType Version Name ExportedCommands
+ ---------- --------- ---- ----------------
+ Binary 2.0.0.115 azureadpreview {Add-AzureADAdministrati...}
+```
+
+### Connect to Azure
+
+To connect to Azure Active Directory, use the following command:
+
+``` PowerShell
+Connect-AzureAD
``` ### Create the custom role
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/andromedascm-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/andromedascm-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 04/16/2019
+ms.date: 12/28/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with Andromeda
@@ -21,9 +21,6 @@ Integrating Andromeda with Azure AD provides you with the following benefits:
* You can enable your users to be automatically signed-in to Andromeda (Single Sign-On) with their Azure AD accounts. * You can manage your accounts in one central location - the Azure portal.
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
- ## Prerequisites To configure Azure AD integration with Andromeda, you need the following items:
@@ -42,59 +39,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Andromeda into Azure AD, you need to add Andromeda from the gallery to your list of managed SaaS apps.
-**To add Andromeda from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Andromeda**, select **Andromeda** from result panel then click **Add** button to add the application.
-
- ![Andromeda in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Andromeda based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Andromeda needs to be established.
-
-To configure and test Azure AD single sign-on with Andromeda, you need to complete the following building blocks:
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Andromeda** in the search box.
+1. Select **Andromeda** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Andromeda Single Sign-On](#configure-andromeda-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Andromeda test user](#create-andromeda-test-user)** - to have a counterpart of Britta Simon in Andromeda that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-### Configure Azure AD single sign-on
+## Configure and test Azure AD SSO for Andromeda
-In this section, you enable Azure AD single sign-on in the Azure portal.
+Configure and test Azure AD SSO with Andromeda using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Andromeda.
-To configure Azure AD single sign-on with Andromeda, perform the following steps:
+To configure and test Azure AD SSO with Andromeda, perform the following steps:
-1. In the [Azure portal](https://portal.azure.com/), on the **Andromeda** application integration page, select **Single sign-on**.
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+2. **[Configure Andromeda SSO](#configure-andromeda-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Andromeda test user](#create-andromeda-test-user)** - to have a counterpart of Britta Simon in Andromeda that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Andromeda** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
-
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<tenantURL>.ngcxpress.com/`
@@ -109,159 +86,135 @@ To configure Azure AD single sign-on with Andromeda, perform the following steps
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<tenantURL>.ngcxpress.com/SAMLLogon.aspx`
- > [!NOTE]
- > These values are not real. You will update the value with the actual Identifier, Reply URL, and Sign-On URL which is explained later in the tutorial.
+ > [!NOTE]
+ > These values are not real. You will update the value with the actual Identifier, Reply URL, and Sign-On URL which is explained later in the tutorial.
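The placeholder patterns above follow a fixed shape around the tenant name. A small illustrative helper (the `<tenantURL>` value remains whatever your Andromeda tenant uses; these are not real URLs):

```python
def andromeda_urls(tenant_url: str) -> dict:
    # Substitute the <tenantURL> placeholder from the patterns above
    # into the Identifier and Sign-on URL shapes.
    base = f"https://{tenant_url}.ngcxpress.com"
    return {
        "identifier": base + "/",
        "sign_on_url": base + "/SAMLLogon.aspx",
    }

print(andromeda_urls("contoso"))
```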
6. The Andromeda application expects the SAML assertions in a specific format. Configure the following claims for this application. You can manage the values of these attributes from the **User Attributes** section on the application integration page. On the **Set up Single Sign-On with SAML** page, click the **Edit** button to open the **User Attributes** dialog.
- ![Screenshot shows User attributes such as givenname user.givenname and emailaddress user.mail.](common/edit-attribute.png)
+ ![Screenshot shows User attributes such as givenname user.givenname and emailaddress user.mail.](common/edit-attribute.png)
- > [!Important]
- > Clear out the NameSpace definitions while setting these up.
+ > [!Important]
+ > Clear out the NameSpace definitions while setting these up.
7. In the **User Claims** section on the **User Attributes** dialog, edit the claims by using **Edit icon** or add the claims by using **Add new claim** to configure SAML token attribute as shown in the image above and perform the following steps:
- | Name | Source Attribute|
- | ------ | -----------|
- | role | App specific role |
- | type | App Type |
- | company | CompanyName |
+ | Name | Source Attribute|
+ | ------ | -----------|
+ | role | App specific role |
+ | type | App Type |
+ | company | CompanyName |
> [!NOTE]
- > There are not real values. These values are only for demo purpose, please use your organization roles.
+ > Andromeda expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
- 1. Click **Add new claim** to open the **Manage user claims** dialog.
+ a. Click **Add new claim** to open the **Manage user claims** dialog.
- ![Screenshot shows User claims with options to Add new claim and save.](common/new-save-attribute.png)
+ ![Screenshot shows User claims with options to Add new claim and save.](common/new-save-attribute.png)
- ![Screenshot shows Manage user claims where you can enter values described I this step.](common/new-attribute-details.png)
+ ![Screenshot shows Manage user claims where you can enter values described I this step.](common/new-attribute-details.png)
- 1. In the **Name** textbox, type the attribute name shown for that row.
+ b. In the **Name** textbox, type the attribute name shown for that row.
- 1. Leave the **Namespace** blank.
+ c. Leave the **Namespace** blank.
- 1. Select Source as **Attribute**.
+ d. Select Source as **Attribute**.
- 1. From the **Source attribute** list, type the attribute value shown for that row.
+ e. From the **Source attribute** list, type the attribute value shown for that row.
- 1. Click **Ok**
+ f. Click **Ok**
- 1. Click **Save**.
+ g. Click **Save**.
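The claim table above maps claim names to source attributes with a blank namespace. As a rough illustration of what the resulting SAML attribute statements look like (the rendering helper is hypothetical; Azure AD generates the real assertion):

```python
CLAIMS = {
    "role": "App specific role",
    "type": "App Type",
    "company": "CompanyName",
}

def to_saml_attributes(claims: dict) -> list:
    # With the Namespace left blank (as the tutorial requires), the
    # Name attribute carries just the claim name.
    return [
        f'<Attribute Name="{name}"><AttributeValue>{value}</AttributeValue></Attribute>'
        for name, value in claims.items()
    ]

for attr in to_saml_attributes(CLAIMS):
    print(attr)
```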
8. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/certificatebase64.png)
9. On the **Set up Andromeda** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- 1. Login URL
-
- 1. Azure AD Identifier
-
- 1. Logout URL
-
-### Configure Andromeda Single Sign-On
-
-1. Sign-on to your Andromeda company site as administrator.
-
-2. On the top of the menubar click **Admin** and navigate to **Administration**.
-
- ![Andromeda admin](./media/andromedascm-tutorial/tutorial_andromedascm_admin.png)
-
-3. On the left side of tool bar under **Interfaces** section, click **SAML Configuration**.
-
- ![Andromeda saml](./media/andromedascm-tutorial/tutorial_andromedascm_saml.png)
-
-4. On the **SAML Configuration** section page, perform the following steps:
-
- ![Andromeda config](./media/andromedascm-tutorial/tutorial_andromedascm_config.png)
-
- 1. Check **Enable SSO with SAML**.
-
- 1. Under **Andromeda Information** section, copy the **SP Identity** value and paste it into the **Identifier** textbox of **Basic SAML Configuration** section.
-
- 1. Copy the **Consumer URL** value and paste it into the **Reply URL** textbox of **Basic SAML Configuration** section.
-
- 1. Copy the **Logon URL** value and paste it into the **Sign-on URL** textbox of **Basic SAML Configuration** section.
-
- 1. Under **SAML Identity Provider** section, type your IDP Name.
-
- 1. In the **Single Sign On End Point** textbox, paste the value of **Login URL** which, you have copied from the Azure portal.
-
- 1. Open the downloaded **Base64 encoded certificate** from Azure portal in notepad, paste it into the **X 509 Certificate** textbox.
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
- 1. Map the following attributes with the respective value to facilitate SSO login from Azure AD. The **User ID** attribute is required for logging in. For provisioning, **Email**, **Company**, **UserType**, and **Role** are required. In this section, we define attributes mapping (name and values) which correlate to those defined within Azure portal
-
- ![Andromeda attbmap](./media/andromedascm-tutorial/tutorial_andromedascm_attbmap.png)
-
- 1. Click **Save**.
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- ![The "Users and groups" and "All users" links](common/users.png)
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
-2. Select **New user** at the top of the screen.
+### Assign the Azure AD test user
- ![New user Button](common/new-user.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Andromeda.
-3. In the User properties, perform the following steps.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Andromeda**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The User dialog box](common/user-properties.png)
+### Configure Andromeda SSO
- a. In the **Name** field enter **BrittaSimon**.
+1. Sign on to your Andromeda company site as an administrator.
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
+2. At the top of the menu bar, click **Admin** and navigate to **Administration**.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+ ![Andromeda admin](./media/andromedascm-tutorial/tutorial_andromedascm_admin.png)
- d. Click **Create**.
+3. On the left side of the toolbar, under the **Interfaces** section, click **SAML Configuration**.
-### Assign the Azure AD test user
+ ![Andromeda saml](./media/andromedascm-tutorial/tutorial_andromedascm_saml.png)
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Andromeda.
+4. On the **SAML Configuration** section page, perform the following steps:
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Andromeda**.
+ ![Andromeda config](./media/andromedascm-tutorial/tutorial_andromedascm_config.png)
- ![Enterprise applications blade](common/enterprise-applications.png)
+ a. Check **Enable SSO with SAML**.
-2. In the applications list, select **Andromeda**.
+ b. Under **Andromeda Information** section, copy the **SP Identity** value and paste it into the **Identifier** textbox of **Basic SAML Configuration** section.
- ![The Andromeda link in the Applications list](common/all-applications.png)
+ c. Copy the **Consumer URL** value and paste it into the **Reply URL** textbox of **Basic SAML Configuration** section.
-3. In the menu on the left, select **Users and groups**.
+ d. Copy the **Logon URL** value and paste it into the **Sign-on URL** textbox of **Basic SAML Configuration** section.
- ![The "Users and groups" link](common/users-groups-blade.png)
+ e. Under **SAML Identity Provider** section, type your IDP Name.
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+ f. In the **Single Sign On End Point** textbox, paste the value of **Login URL**, which you copied from the Azure portal.
- ![The Add Assignment pane](common/add-assign-user.png)
+ g. Open the downloaded **Base64 encoded certificate** from the Azure portal in Notepad, and paste it into the **X 509 Certificate** textbox.
+
+ h. Map the following attributes with the respective value to facilitate SSO login from Azure AD. The **User ID** attribute is required for logging in. For provisioning, **Email**, **Company**, **UserType**, and **Role** are required. In this section, we define the attribute mappings (names and values) that correlate to those defined within the Azure portal.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+ ![Andromeda attbmap](./media/andromedascm-tutorial/tutorial_andromedascm_attbmap.png)
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+ i. Click **Save**.
-7. In the **Add Assignment** dialog click the **Assign** button.
### Create Andromeda test user In this section, a user called Britta Simon is created in Andromeda. Andromeda supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Andromeda, a new one is created after authentication. If you need to create a user manually, contact [Andromeda Client support team](https://www.ngcsoftware.com/support/).
-### Test single sign-on
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
* Click on **Test this application** in the Azure portal. This will redirect to the Andromeda Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Andromeda Sign-on URL directly and initiate the login flow from there.
-When you click the Andromeda tile in the Access Panel, you should be automatically signed in to the Andromeda for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to Andromeda, for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Andromeda tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to Andromeda, for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Andromeda you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/appinux-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/appinux-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 11/06/2019
+ms.date: 12/28/2020
ms.author: jeedes ---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate Appinux with Azure Active Direct
* Enable your users to be automatically signed-in to Appinux with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -36,15 +35,13 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Appinux supports **SP** initiated SSO - * Appinux supports **Just In Time** user provisioning - ## Adding Appinux from the gallery To configure the integration of Appinux into Azure AD, you need to add Appinux from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add a new application, select **New application**.
@@ -52,11 +49,11 @@ To configure the integration of Appinux into Azure AD, you need to add Appinux f
1. Select **Appinux** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Appinux
+## Configure and test Azure AD SSO for Appinux
Configure and test Azure AD SSO with Appinux using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Appinux.
-To configure and test Azure AD SSO with Appinux, complete the following building blocks:
+To configure and test Azure AD SSO with Appinux, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +66,9 @@ To configure and test Azure AD SSO with Appinux, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Appinux** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Appinux** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -105,6 +102,9 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| `wanshort` | `http://appinux.com/windowsaccountname2` | `extractmailprefix([userprincipalname])` | | `nameidentifier` | `http://schemas.xmlsoap.org/ws/2005/05/identity/claims` | `user.employeeid` |
+ > [!NOTE]
+ > Appinux expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
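One of the transformations in the claim table above, `extractmailprefix([userprincipalname])`, simply takes everything before the `@` in the UPN. A hedged Python equivalent (illustrative only; the real transformation runs inside Azure AD):

```python
def extract_mail_prefix(user_principal_name: str) -> str:
    # Mirrors the extractmailprefix() claims transformation: return the
    # part of the UPN before the "@".
    return user_principal_name.split("@", 1)[0]

print(extract_mail_prefix("b.simon@contoso.com"))
```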
+ 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/metadataxml.png)
@@ -132,15 +132,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Appinux**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select them from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Appinux SSO
@@ -156,16 +150,15 @@ In this section, a user called Britta Simon is created in Appinux. Appinux suppo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Appinux tile in the Access Panel, you should be automatically signed in to the Appinux for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the Appinux Sign-on URL where you can initiate the login flow.
-## Additional resources
+* Go to Appinux Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Appinux tile in My Apps, you will be redirected to the Appinux Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Appinux with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Appinux you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/appneta-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/appneta-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 11/06/2019
+ms.date: 12/28/2020
ms.author: jeedes
---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate AppNeta Performance Monitor with
* Enable your users to be automatically signed-in to AppNeta Performance Monitor with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -36,7 +35,6 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* AppNeta Performance Monitor supports **SP** initiated SSO
* AppNeta Performance Monitor supports **Just In Time** user provisioning

> [!NOTE]
@@ -47,7 +45,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of AppNeta Performance Monitor into Azure AD, you need to add AppNeta Performance Monitor from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
@@ -55,11 +53,11 @@ To configure the integration of AppNeta Performance Monitor into Azure AD, you n
1. Select **AppNeta Performance Monitor** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for AppNeta Performance Monitor
+## Configure and test Azure AD SSO for AppNeta Performance Monitor
Configure and test Azure AD SSO with AppNeta Performance Monitor using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AppNeta Performance Monitor.
-To configure and test Azure AD SSO with AppNeta Performance Monitor, complete the following building blocks:
+To configure and test Azure AD SSO with AppNeta Performance Monitor, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -72,9 +70,9 @@ To configure and test Azure AD SSO with AppNeta Performance Monitor, complete th
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **AppNeta Performance Monitor** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **AppNeta Performance Monitor** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -83,9 +81,6 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign on URL** text box, type a URL using the following pattern: `https://<subdomain>.pm.appneta.com`
- b. In the **Identifier (Entity ID)** text box, type a value:
- `PingConnect`
   > [!NOTE]
   > The Sign-on URL value is not real. Update this value with the actual Sign-On URL. Contact [AppNeta Performance Monitor Client support team](mailto:support@appneta.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
@@ -107,7 +102,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| | |

> [!NOTE]
- > **groups** refers to the security group in Appneta which is mapped to a **Role** in Azure AD. Please refer to [this](../develop/active-directory-enterprise-app-role-management.md) doc which explains how to create custom roles in Azure AD.
+ > **groups** refers to the security group in Appneta which is mapped to a **Role** in Azure AD. Please refer to [this](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) doc which explains how to create custom roles in Azure AD.
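Custom roles like the ones the **groups** claim maps to are defined in the app registration's manifest. As a rough, hypothetical sketch (all field values below are illustrative placeholders, not taken from this tutorial), an `appRoles` manifest entry has this shape:

```python
import json
import uuid

# Hypothetical appRoles manifest entry; every value here is a placeholder.
app_role = {
    "allowedMemberTypes": ["User"],   # role can be assigned to users and groups
    "description": "Maps to a security group in AppNeta",
    "displayName": "Admin",
    "id": str(uuid.uuid4()),          # any unique GUID
    "isEnabled": True,
    "value": "Admin",                 # string emitted in the role claim
}
print(json.dumps(app_role, indent=2))
```

The `value` field is what is issued in the token, so it is the string the application compares against its own group names.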
1. Click **Add new claim** to open the **Manage user claims** dialog.
@@ -150,17 +145,10 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **AppNeta Performance Monitor**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select them from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure AppNeta Performance Monitor SSO

To configure single sign-on on the **AppNeta Performance Monitor** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [AppNeta Performance Monitor support team](mailto:support@appneta.com). They set this setting to have the SAML SSO connection set properly on both sides.
@@ -174,16 +162,15 @@ In this section, a user called Britta Simon is created in AppNeta Performance Mo
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the AppNeta Performance Monitor tile in the Access Panel, you should be automatically signed in to the AppNeta Performance Monitor for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the AppNeta Performance Monitor Sign-on URL where you can initiate the login flow.
-## Additional resources
+* Go to AppNeta Performance Monitor Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the AppNeta Performance Monitor tile in My Apps, you will be redirected to the AppNeta Performance Monitor Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try AppNeta Performance Monitor with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure AppNeta Performance Monitor you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/apptio-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/apptio-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 08/29/2019
+ms.date: 11/03/2020
ms.author: jeedes
---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Apptio with Azure Active Directo
* Enable your users to be automatically signed-in to Apptio with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
@@ -43,18 +41,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Apptio into Azure AD, you need to add Apptio from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Apptio** in the search box.
1. Select **Apptio** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Apptio
+## Configure and test Azure AD SSO for Apptio
Configure and test Azure AD SSO with Apptio using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Apptio.
-To configure and test Azure AD SSO with Apptio, complete the following building blocks:
+To configure and test Azure AD SSO with Apptio, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -67,9 +65,9 @@ To configure and test Azure AD SSO with Apptio, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Apptio** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Apptio** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -78,15 +76,15 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
In the **Identifier** text box, type a URL: `urn:federation:apptio`
-1. The role claim is pre-configured so you don't have to configure it but you still need to create them in Azure AD using this [article](../develop/active-directory-enterprise-app-role-management.md).
+1. The role claim is pre-configured, so you don't have to configure it, but you still need to create the roles in Azure AD using this [article](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![The Certificate download link](common/metadataxml.png)
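The downloaded federation metadata is plain XML. Purely as an illustration (the sample document below is a minimal hypothetical stand-in, not real Azure AD output), the Base64 signing certificate can be pulled out of such a file with the Python standard library:

```python
import xml.etree.ElementTree as ET

# Minimal hypothetical stand-in for a downloaded Federation Metadata XML file.
METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/contoso/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIICplaceholder</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>"""

def signing_certs(metadata_xml):
    """Return the Base64 X509 signing certificate(s) found in SAML metadata."""
    ns = {
        "md": "urn:oasis:names:tc:SAML:2.0:metadata",
        "ds": "http://www.w3.org/2000/09/xmldsig#",
    }
    certs = []
    for kd in ET.fromstring(metadata_xml).findall(".//md:KeyDescriptor", ns):
        if kd.get("use") == "signing":
            for cert in kd.findall(".//ds:X509Certificate", ns):
                certs.append(cert.text.strip())
    return certs

print(signing_certs(METADATA))  # ['MIICplaceholder']
```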
1. On the **Set up Apptio** section, copy the appropriate URL(s) based on your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
### Create an Azure AD test user
@@ -107,15 +105,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Apptio**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select them from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Apptio SSO
@@ -128,16 +120,13 @@ In this section, you create a user called B.Simon in Apptio. Work with [Apptio
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Apptio tile in the Access Panel, you should be automatically signed in to the Apptio for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Apptio for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Apptio tile in My Apps, you should be automatically signed in to the Apptio for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Apptio with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Apptio you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/arc-facilities-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/arc-facilities-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 09/05/2019
+ms.date: 12/08/2020
ms.author: jeedes
---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate ARC Facilities with Azure Active
* Enable your users to be automatically signed-in to ARC Facilities with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -45,7 +44,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of ARC Facilities into Azure AD, you need to add ARC Facilities from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
@@ -69,9 +68,9 @@ To configure and test Azure AD SSO with ARC Facilities, complete the following b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **ARC Facilities** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ARC Facilities** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -95,6 +94,9 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
d. Click **Save**.
+ > [!NOTE]
+ > ARC Facilities expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
+
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.

   ![The Certificate download link](common/certificatebase64.png)
@@ -122,15 +124,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **ARC Facilities**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure ARC Facilities SSO
@@ -143,16 +139,13 @@ In this section, a user called Britta Simon is created in ARC Facilities. ARC Fa
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the ARC Facilities tile in the Access Panel, you should be automatically signed in to the ARC Facilities for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the ARC Facilities for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ARC Facilities tile in My Apps, you should be automatically signed in to the ARC Facilities for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try ARC Facilities with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure ARC Facilities you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/arc-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/arc-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 10/21/2019
+ms.date: 12/16/2020
ms.author: jeedes
---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Arc Publishing - SSO with Azure
* Enable your users to be automatically signed-in to Arc Publishing - SSO with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites

To get started, you need the following items:
@@ -34,8 +32,6 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.

* Arc Publishing - SSO supports **SP and IDP** initiated SSO
* Arc Publishing - SSO supports **Just In Time** user provisioning
@@ -44,7 +40,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Arc Publishing - SSO into Azure AD, you need to add Arc Publishing - SSO from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
@@ -52,11 +48,11 @@ To configure the integration of Arc Publishing - SSO into Azure AD, you need to
1. Select **Arc Publishing - SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Arc Publishing - SSO
+## Configure and test Azure AD SSO for Arc Publishing - SSO
Configure and test Azure AD SSO with Arc Publishing - SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Arc Publishing - SSO.
-To configure and test Azure AD SSO with Arc Publishing - SSO, complete the following building blocks:
+To configure and test Azure AD SSO with Arc Publishing - SSO, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +65,9 @@ To configure and test Azure AD SSO with Arc Publishing - SSO, complete the follo
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Arc Publishing - SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Arc Publishing - SSO** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -106,7 +102,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| groups | user.assignedroles | > [!NOTE]
- > Here the **groups** attribute is mapped with **user.assignedroles**. These are custom roles created in Azure AD to map the group names back in application. You can find more guidance [here](../develop/active-directory-enterprise-app-role-management.md) on how to create custom roles in Azure AD
+ > Here the **groups** attribute is mapped with **user.assignedroles**. These are custom roles created in Azure AD to map the group names back in the application. You can find more guidance [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) on how to create custom roles in Azure AD.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -135,15 +131,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Arc Publishing - SSO**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select them from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Arc Publishing - SSO SSO
@@ -159,16 +149,21 @@ In this section, a user called Britta Simon is created in Arc Publishing - SSO.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Arc Publishing - SSO Sign-on URL where you can initiate the login flow.
+
+* Go to Arc Publishing - SSO Sign-on URL directly and initiate the login flow from there.
-When you click the Arc Publishing - SSO tile in the Access Panel, you should be automatically signed in to the Arc Publishing - SSO for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Arc Publishing - SSO for which you set up the SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Arc Publishing - SSO tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if the app is configured in SP mode, or automatically signed in if it's configured in IDP mode. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Arc Publishing - SSO with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Arc Publishing - SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control is an extension of Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
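The SP-initiated test flow described in the Test SSO section above begins with the service provider sending a SAML AuthnRequest to Azure AD over the HTTP-Redirect binding. A minimal sketch of how such a redirect URL is assembled, assuming placeholder endpoint and issuer values (none of the URLs below are real):

```python
import base64
import urllib.parse
import zlib
from datetime import datetime, timezone

def build_redirect_url(login_base: str, acs_url: str, issuer: str) -> str:
    """Build a SAML HTTP-Redirect binding URL carrying a deflated AuthnRequest."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        f'<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        f'ID="_id1" Version="2.0" IssueInstant="{now}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f'<saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">{issuer}</saml:Issuer>'
        f'</samlp:AuthnRequest>'
    )
    # HTTP-Redirect binding: raw DEFLATE (strip the 2-byte zlib header and
    # 4-byte checksum), then base64, then URL-encode as a query parameter.
    deflated = zlib.compress(authn_request.encode())[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    return f"{login_base}?SAMLRequest={urllib.parse.quote(saml_request)}"

# Hypothetical values for illustration; real ones come from Basic SAML Configuration.
url = build_redirect_url(
    "https://login.microsoftonline.com/<tenant-id>/saml2",
    "https://example.arcpublishing.invalid/sso/acs",
    "https://example.arcpublishing.invalid/metadata",
)
print(url)
```

This is only a sketch of the binding mechanics; a production request is generated and signed by a SAML library, not built by hand.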
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/catchpoint-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/catchpoint-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 02/27/2020
+ms.date: 12/16/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you learn how to integrate Catchpoint with Azure Active Direct
* Enable automatic Catchpoint sign-in for users with Azure AD accounts.
* Manage your accounts in one central location: the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -36,20 +34,19 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Catchpoint supports SP-initiated and IDP-initiated SSO. * Catchpoint supports just-in-time (JIT) user provisioning.
-* After you configure Catchpoint, you can enforce session control. This precaution protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control is an extension of Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Add Catchpoint from the gallery

To configure the integration of Catchpoint into Azure AD, add Catchpoint to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) with a work, school, or personal Microsoft account.
+1. Sign in to the Azure portal with a work, school, or personal Microsoft account.
1. On the left pane, select the **Azure Active Directory** service.
1. Go to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Catchpoint** in the search box.
1. Select **Catchpoint** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Catchpoint
+## Configure and test Azure AD SSO for Catchpoint
For SSO to work, you need to link an Azure AD user with a user in Catchpoint. For this tutorial, we'll configure a test user called **B.Simon**.
@@ -66,10 +63,10 @@ Complete the following sections:
Follow these steps in the Azure portal to enable Azure AD SSO:
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Sign in to the Azure portal.
1. On the **Catchpoint** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set Up Single Sign-On with SAML** page, select the pen icon to edit the **Basic SAML Configuration** settings.
+1. On the **Set Up Single Sign-On with SAML** page, select the pencil icon to edit the **Basic SAML Configuration** settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -99,7 +96,7 @@ Follow these steps in the Azure portal to enable Azure AD SSO:
| namespace | user.assignedrole |

> [!NOTE]
- > The `namespace` claim needs to be mapped with the account name. This account name should be set up with a role in Azure AD to be passed back in SAML response. For more information about roles in Azure AD, see [Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md).
+ > The `namespace` claim needs to be mapped with the account name. This account name should be set up with a role in Azure AD to be passed back in SAML response. For more information about roles in Azure AD, see [Configure the role claim issued in the SAML token for enterprise applications](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. Go to the **Set Up Single Sign-On with SAML** page. In the **SAML Signing Certificate** section, find **Certificate (Base64)**. Select **Download** to save the certificate to your computer.
@@ -128,15 +125,9 @@ In this section, you enable B.Simon to use Azure single sign-on by granting acce
1. In the Azure portal, select **Enterprise Applications** > **All applications**.
1. In the applications list, select **Catchpoint**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, and then select **Users and groups** in the **Add Assignment** dialog box.
-
-   ![The "Add user" link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog box, select **B.Simon** from the list of users. Click **Select** at the bottom of the screen.
-1. If you expect a role value in the SAML assertion, look in the **Select Role** dialog box and choose the user's role from the list. Click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog box, select **Assign**.

## Configure Catchpoint SSO
@@ -168,23 +159,26 @@ Catchpoint supports just-in-time user provisioning, which is enabled by default.
## Test SSO
-In this section, you test your Azure AD single sign-on configuration by using the My Apps portal.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you select the Catchpoint tile in the My Apps portal, you should be automatically signed in to the Catchpoint app with SSO configured. For more information about the My Apps portal, see [Sign in and start apps from the My Apps portal](../user-help/my-apps-portal-end-user-access.md).
+#### SP initiated:
-> [!NOTE]
-> When you're signed in to the Catchpoint application through the login page, after providing **Catchpoint Credentials**, enter the valid **Namespace** value in the **Company Credentials(SSO)** field and select **Login**.
->
-> ![Catchpoint configuration](./media/catchpoint-tutorial/loginimage.png)
+* Click **Test this application** in the Azure portal. This redirects you to the Catchpoint Sign-on URL, where you can initiate the login flow.
+
+* Go to Catchpoint Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [List of tutorials on how to integrate SaaS apps with Azure Active Directory](./tutorial-list.md)
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Catchpoint instance for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Catchpoint tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if the app is configured in SP mode, or automatically signed in if it's configured in IDP mode. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md) -- [Try Catchpoint with Azure AD](https://aad.portal.azure.com/)
+> [!NOTE]
+> When you're signed in to the Catchpoint application through the login page, after providing **Catchpoint Credentials**, enter the valid **Namespace** value in the **Company Credentials(SSO)** field and select **Login**.
+>
+> ![Catchpoint configuration](./media/catchpoint-tutorial/loginimage.png)
+
+## Next steps
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+After you configure Catchpoint, you can enforce session control. This precaution protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control is an extension of Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
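The note above explains that the `namespace` claim (mapped to `user.assignedrole`) is passed back in the SAML response and then entered in Catchpoint's **Company Credentials(SSO)** field. As an illustration of what the application side does with that claim, here is a minimal sketch of extracting a named attribute from a SAML assertion; the assertion fragment and the value `contoso-account` are synthetic examples, not real data:

```python
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def get_attribute(assertion_xml: str, name: str) -> list[str]:
    """Return all values of the named SAML attribute found in an assertion."""
    root = ET.fromstring(assertion_xml)
    values = []
    for attr in root.iter("{urn:oasis:names:tc:SAML:2.0:assertion}Attribute"):
        if attr.get("Name") == name:
            # AttributeValue children carry the claim values issued by Azure AD.
            for value in attr.findall("saml:AttributeValue", SAML_NS):
                values.append(value.text or "")
    return values

# Synthetic, unsigned assertion fragment for illustration only.
sample = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:AttributeStatement>
    <saml:Attribute Name="namespace">
      <saml:AttributeValue>contoso-account</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""
print(get_attribute(sample, "namespace"))  # ['contoso-account']
```

A real SAML response is signed and must be validated with a SAML library before any claim is trusted; this sketch only shows where the role claim lives in the XML.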
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/certent-equity-management-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/certent-equity-management-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 01/03/2020
+ms.date: 12/16/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Certent Equity Management with A
* Enable your users to be automatically signed-in to Certent Equity Management with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -40,7 +38,7 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Certent Equity Management into Azure AD, you need to add Certent Equity Management from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
@@ -48,26 +46,26 @@ To configure the integration of Certent Equity Management into Azure AD, you nee
1. Select **Certent Equity Management** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Certent Equity Management
+## Configure and test Azure AD SSO for Certent Equity Management
Configure and test Azure AD SSO with Certent Equity Management using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Certent Equity Management.
-To configure and test Azure AD SSO with Certent Equity Management, complete the following building blocks:
+To configure and test Azure AD SSO with Certent Equity Management, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Certent Equity Management SSO](#configure-certent-equity-management-sso)** - to configure the single sign-on settings on application side.
- * **[Create Certent Equity Management test user](#create-certent-equity-management-test-user)** - to have a counterpart of B.Simon in Certent Equity Management that is linked to the Azure AD representation of user.
+ 1. **[Create Certent Equity Management test user](#create-certent-equity-management-test-user)** - to have a counterpart of B.Simon in Certent Equity Management that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Certent Equity Management** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Certent Equity Management** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -95,7 +93,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| ROLE | user.assignedroles |

> [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure **Role** in Azure AD.
+ > Please click [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) to know how to configure **Role** in Azure AD.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
@@ -124,15 +122,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Certent Equity Management**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Certent Equity Management SSO
@@ -145,16 +137,13 @@ In this section, you create a user called Britta Simon in Certent Equity Managem
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Certent Equity Management tile in the Access Panel, you should be automatically signed in to the Certent Equity Management for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Certent Equity Management instance for which you set up SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Certent Equity Management tile in My Apps, you should be automatically signed in to the Certent Equity Management instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Certent Equity Management with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Certent Equity Management, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control is an extension of Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
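The configuration steps above have you download the **Federation Metadata XML** file from the SAML Signing Certificate section. That file carries, among other things, the tenant's entity ID and the Base64 signing certificate; a minimal sketch of pulling those two fields out, using a synthetic metadata fragment (the `TENANT_ID` and certificate body are placeholders, not real values):

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
DS = "http://www.w3.org/2000/09/xmldsig#"

def parse_metadata(xml_text: str):
    """Extract the entity ID and signing certificate body from federation metadata."""
    root = ET.fromstring(xml_text)
    entity_id = root.get("entityID")
    cert = root.find(f".//{{{DS}}}X509Certificate")
    return entity_id, (cert.text.strip() if cert is not None else None)

# Synthetic metadata document for illustration; the real file is the one
# downloaded from the Azure portal.
sample = (
    f'<EntityDescriptor xmlns="{MD}" entityID="https://sts.windows.net/TENANT_ID/">'
    f'<IDPSSODescriptor><KeyDescriptor use="signing">'
    f'<KeyInfo xmlns="{DS}"><X509Data>'
    f'<X509Certificate>MIIC...base64...</X509Certificate>'
    f'</X509Data></KeyInfo></KeyDescriptor></IDPSSODescriptor></EntityDescriptor>'
)
entity_id, cert = parse_metadata(sample)
print(entity_id, cert)
```

Applications that accept the metadata file directly do this parsing themselves; the sketch is only to show what the file contains.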
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/colortokens-ztna-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/colortokens-ztna-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 05/15/2020
+ms.date: 12/16/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate ColorTokens ZTNA with Azure Acti
* Enable your users to be automatically signed-in to ColorTokens ZTNA with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -35,24 +33,23 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.

* ColorTokens ZTNA supports **SP** initiated SSO
-* Once you configure ColorTokens ZTNA you can enforce session control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding ColorTokens ZTNA from the gallery

To configure the integration of ColorTokens ZTNA into Azure AD, you need to add ColorTokens ZTNA from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **ColorTokens ZTNA** in the search box.
1. Select **ColorTokens ZTNA** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for ColorTokens ZTNA
+## Configure and test Azure AD SSO for ColorTokens ZTNA
Configure and test Azure AD SSO with ColorTokens ZTNA using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ColorTokens ZTNA.
-To configure and test Azure AD SSO with ColorTokens ZTNA, complete the following building blocks:
+To configure and test Azure AD SSO with ColorTokens ZTNA, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -65,9 +62,9 @@ To configure and test Azure AD SSO with ColorTokens ZTNA, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **ColorTokens ZTNA** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ColorTokens ZTNA** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -76,7 +73,6 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign on URL** text box, type a URL using the following pattern:
   `https://<tenantname>.spectrum.colortokens.com`
-
> [!NOTE]
> These values are not real. Update these values with the actual Sign on URL, Identifier and Reply URL. Contact [ColorTokens ZTNA Client support team](mailto:support@colortokens.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
@@ -92,7 +88,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| Group | user.groups |

> [!NOTE]
- > Click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to create roles in Azure AD.
+ > Click [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) to know how to create roles in Azure AD.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
@@ -121,15 +117,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **ColorTokens ZTNA**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure ColorTokens ZTNA SSO
@@ -142,20 +132,15 @@ In this section, you create a user called Britta Simon in ColorTokens ZTNA. Work
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the ColorTokens ZTNA tile in the Access Panel, you should be automatically signed in to the ColorTokens ZTNA for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click **Test this application** in the Azure portal. This redirects you to the ColorTokens ZTNA Sign-on URL, where you can initiate the login flow.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Go to ColorTokens ZTNA Sign-on URL directly and initiate the login flow from there.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the ColorTokens ZTNA tile in My Apps, you're redirected to the ColorTokens ZTNA Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try ColorTokens ZTNA with Azure AD](https://aad.portal.azure.com/) -- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect ColorTokens ZTNA with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure ColorTokens ZTNA, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control is an extension of Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/dome9arc-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/dome9arc-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 10/17/2019
+ms.date: 12/16/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Check Point CloudGuard Dome9 Arc
* Enable your users to be automatically signed-in to Check Point CloudGuard Dome9 Arc with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -43,18 +41,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Check Point CloudGuard Dome9 Arc into Azure AD, you need to add Check Point CloudGuard Dome9 Arc from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Check Point CloudGuard Dome9 Arc** in the search box.
1. Select **Check Point CloudGuard Dome9 Arc** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Check Point CloudGuard Dome9 Arc
+## Configure and test Azure AD SSO for Check Point CloudGuard Dome9 Arc
Configure and test Azure AD SSO with Check Point CloudGuard Dome9 Arc using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Check Point CloudGuard Dome9 Arc.
-To configure and test Azure AD SSO with Check Point CloudGuard Dome9 Arc, complete the following building blocks:
+To configure and test Azure AD SSO with Check Point CloudGuard Dome9 Arc, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -67,18 +65,15 @@ To configure and test Azure AD SSO with Check Point CloudGuard Dome9 Arc, comple
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Check Point CloudGuard Dome9 Arc** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Check Point CloudGuard Dome9 Arc** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)

1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL:
- `https://secure.dome9.com/`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ In the **Reply URL** text box, type a URL using the following pattern:
`https://secure.dome9.com/sso/saml/<yourcompanyname>`

1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
@@ -100,7 +95,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| memberof | user.assignedroles |

>[!NOTE]
- >Click [here](./apptio-tutorial.md) to know how to create roles in Azure AD.
+ >Click [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) to know how to create roles in Azure AD.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -129,15 +124,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Check Point CloudGuard Dome9 Arc**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select a role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Check Point CloudGuard Dome9 Arc SSO
@@ -209,16 +198,21 @@ To enable Azure AD users to sign in to Check Point CloudGuard Dome9 Arc, they mu
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
* Click on **Test this application** in the Azure portal. This redirects to the Check Point CloudGuard Dome9 Arc Sign-on URL, where you can initiate the login flow.
+
* Go to the Check Point CloudGuard Dome9 Arc Sign-on URL directly and initiate the login flow from there.
-When you click the Check Point CloudGuard Dome9 Arc tile in the Access Panel, you should be automatically signed in to the Check Point CloudGuard Dome9 Arc for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Check Point CloudGuard Dome9 Arc for which you set up SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in either mode. When you click the Check Point CloudGuard Dome9 Arc tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you are automatically signed in to the Check Point CloudGuard Dome9 Arc for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Check Point CloudGuard Dome9 Arc with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Check Point CloudGuard Dome9 Arc, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
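The SP-initiated test option above comes down to sending the browser to the identity provider with a SAML `AuthnRequest` in the query string. Here is an illustrative sketch of that HTTP-Redirect binding (raw DEFLATE, then base64, then URL-encode); the URLs, entity ID, and request ID are placeholders, and the portal's **Test this application** button performs this step for you:

```python
import base64
import urllib.parse
import zlib
from datetime import datetime, timezone

def build_redirect_url(idp_sso_url: str, sp_entity_id: str, acs_url: str) -> str:
    """Build a SAML HTTP-Redirect binding URL carrying a minimal AuthnRequest.

    Illustrative only: real SPs generate a unique ID and usually sign the request.
    """
    issue_instant = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    authn_request = (
        '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        f'ID="_example-id" Version="2.0" IssueInstant="{issue_instant}" '
        f'AssertionConsumerServiceURL="{acs_url}">'
        f'<saml:Issuer>{sp_entity_id}</saml:Issuer>'
        '</samlp:AuthnRequest>'
    )
    # HTTP-Redirect binding: strip the zlib header/trailer to get raw DEFLATE,
    # base64-encode, then percent-encode for the query string.
    deflated = zlib.compress(authn_request.encode("utf-8"))[2:-4]
    encoded = base64.b64encode(deflated).decode("ascii")
    return idp_sso_url + "?SAMLRequest=" + urllib.parse.quote(encoded)
```

Going to the application's sign-on URL directly triggers the same redirect, built by the application instead of this sketch.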
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/dotcom-monitor-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/dotcom-monitor-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/26/2019
+ms.date: 12/16/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Dotcom-Monitor with Azure Active
* Enable your users to be automatically signed-in to Dotcom-Monitor with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -42,33 +40,33 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Dotcom-Monitor into Azure AD, you need to add Dotcom-Monitor from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Dotcom-Monitor** in the search box.
1. Select **Dotcom-Monitor** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Dotcom-Monitor
+## Configure and test Azure AD SSO for Dotcom-Monitor
Configure and test Azure AD SSO with Dotcom-Monitor using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Dotcom-Monitor.
-To configure and test Azure AD SSO with Dotcom-Monitor, complete the following building blocks:
+To configure and test Azure AD SSO with Dotcom-Monitor, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Dotcom Monitor SSO](#configure-dotcom-monitor-sso)** - to configure the single sign-on settings on application side.
- * **[Create Dotcom Monitor test user](#create-dotcom-monitor-test-user)** - to have a counterpart of B.Simon in Dotcom-Monitor that is linked to the Azure AD representation of user.
+ 1. **[Create Dotcom Monitor test user](#create-dotcom-monitor-test-user)** - to have a counterpart of B.Simon in Dotcom-Monitor that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Dotcom-Monitor** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Dotcom-Monitor** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -91,7 +89,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| Roles | user.assignedroles |

> [!NOTE]
- > You can find more guidance [here](../develop/active-directory-enterprise-app-role-management.md) on how to create custom roles in Azure AD.
+ > You can find more guidance [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) on how to create custom roles in Azure AD.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
@@ -120,15 +118,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Dotcom-Monitor**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select a role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Dotcom-Monitor SSO
@@ -141,16 +133,15 @@ In this section, a user called B.Simon is created in Dotcom-Monitor. Dotcom-Moni
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Dotcom-Monitor tile in the Access Panel, you should be automatically signed in to the Dotcom-Monitor for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This redirects to the Dotcom-Monitor Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the Dotcom-Monitor Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Dotcom-Monitor tile in My Apps, you are redirected to the Dotcom-Monitor Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Dotcom-Monitor with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Dotcom-Monitor, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
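Both tutorials above map Azure AD app roles into the SAML assertion through a claim backed by `user.assignedroles`. On the application side, consuming that claim amounts to reading the matching `Attribute` values out of the assertion. A hedged sketch of that step (the claim name suffix and sample assertion here are invented for illustration; real SPs also validate the assertion's signature first):

```python
import xml.etree.ElementTree as ET

SAML_NS = "{urn:oasis:names:tc:SAML:2.0:assertion}"

def roles_from_assertion(assertion_xml: str, attr_name: str = "Roles") -> list:
    """Collect the values of a role attribute (for example, the 'Roles' claim
    mapped to user.assignedroles) from a SAML assertion."""
    root = ET.fromstring(assertion_xml)
    values = []
    for attr in root.iter(f"{SAML_NS}Attribute"):
        # Claim names are full URIs; match on the configured suffix.
        if attr.get("Name", "").endswith(attr_name):
            values.extend(v.text for v in attr.iter(f"{SAML_NS}AttributeValue"))
    return values
```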
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/g-suite-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md
@@ -20,13 +20,6 @@ This tutorial describes the steps you need to perform in both G Suite and Azure
> [!NOTE]
> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
-> [!NOTE]
-> The G Suite connector was recently updated on October 2019. Changes made to the G Suite connector include:
->
-> * Added support for additional G Suite user and group attributes.
-> * Updated G Suite target attribute names to match what is defined [here](https://developers.google.com/admin-sdk/directory).
-> * Updated default attribute mappings.
-
> [!NOTE]
> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
@@ -58,15 +51,15 @@ Before configuring G Suite for automatic user provisioning with Azure AD, you wi
1. Sign in to the [G Suite Admin console](https://admin.google.com/) with your administrator account, and then select **Security**. If you don't see the link, it might be hidden under the **More Controls** menu at the bottom of the screen.
- ![G Suite Security](./media/google-apps-provisioning-tutorial/gapps-security.png)
+ ![G Suite Security](./media/g-suite-provisioning-tutorial/gapps-security.png)
2. On the **Security** page, select **API Reference**.
- ![G Suite API](./media/google-apps-provisioning-tutorial/gapps-api.png)
+ ![G Suite API](./media/g-suite-provisioning-tutorial/gapps-api.png)
3. Select **Enable API access**.
- ![G Suite API Enabled](./media/google-apps-provisioning-tutorial/gapps-api-enabled.png)
+ ![G Suite API Enabled](./media/g-suite-provisioning-tutorial/gapps-api-enabled.png)
> [!IMPORTANT]
> For every user that you intend to provision to G Suite, their user name in Azure AD **must** be tied to a custom domain. For example, user names that look like bob@contoso.onmicrosoft.com are not accepted by G Suite. On the other hand, bob@contoso.com is accepted. You can change an existing user's domain by following the instructions [here](../fundamentals/add-custom-domain.md).
@@ -75,15 +68,15 @@ Before configuring G Suite for automatic user provisioning with Azure AD, you wi
a. In the [G Suite Admin Console](https://admin.google.com/), select **Domains**.
- ![G Suite Domains](./media/google-apps-provisioning-tutorial/gapps-domains.png)
+ ![G Suite Domains](./media/g-suite-provisioning-tutorial/gapps-domains.png)
b. Select **Add a domain or a domain alias**.
- ![G Suite Add Domain](./media/google-apps-provisioning-tutorial/gapps-add-domain.png)
+ ![G Suite Add Domain](./media/g-suite-provisioning-tutorial/gapps-add-domain.png)
c. Select **Add another domain**, and then type in the name of the domain that you want to add.
- ![G Suite Add Another](./media/google-apps-provisioning-tutorial/gapps-add-another.png)
+ ![G Suite Add Another](./media/g-suite-provisioning-tutorial/gapps-add-another.png)
d. Select **Continue and verify domain ownership**. Then follow the steps to verify that you own the domain name. For comprehensive instructions on how to verify your domain with Google, see [Verify your site ownership](https://support.google.com/webmasters/answer/35179).
@@ -91,11 +84,11 @@ Before configuring G Suite for automatic user provisioning with Azure AD, you wi
5. Next, determine which admin account you want to use to manage user provisioning in G Suite. Navigate to **Admin Roles**.
- ![G Suite Admin](./media/google-apps-provisioning-tutorial/gapps-admin.png)
+ ![G Suite Admin](./media/g-suite-provisioning-tutorial/gapps-admin.png)
6. For the **Admin role** of that account, edit the **Privileges** for that role. Make sure to enable all **Admin API Privileges** so that this account can be used for provisioning.
- ![G Suite Admin Privileges](./media/google-apps-provisioning-tutorial/gapps-admin-privileges.png)
+ ![G Suite Admin Privileges](./media/g-suite-provisioning-tutorial/gapps-admin-privileges.png)
## Step 3. Add G Suite from the Azure AD application gallery
@@ -121,9 +114,9 @@ This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users need to sign in to portal.azure.com and can't use aad.portal.azure.com.
- ![Enterprise applications blade](./media/google-apps-provisioning-tutorial/enterprise-applications.png)
+ ![Enterprise applications blade](./media/g-suite-provisioning-tutorial/enterprise-applications.png)
- ![All applications blade](./media/google-apps-provisioning-tutorial/all-applications.png)
+ ![All applications blade](./media/g-suite-provisioning-tutorial/all-applications.png)
2. In the applications list, select **G Suite**.
@@ -133,7 +126,7 @@ This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
- ![Get started blade](./media/google-apps-provisioning-tutorial/get-started.png)
+ ![Get started blade](./media/g-suite-provisioning-tutorial/get-started.png)
4. Set the **Provisioning Mode** to **Automatic**.
@@ -141,11 +134,11 @@ This section guides you through the steps to configure the Azure AD provisioning
5. Under the **Admin Credentials** section, click on **Authorize**. You will be redirected to a Google authorization dialog box in a new browser window.
- ![G Suite authorize](./media/google-apps-provisioning-tutorial/authorize-1.png)
+ ![G Suite authorize](./media/g-suite-provisioning-tutorial/authorize-1.png)
6. Confirm that you want to give Azure AD permissions to make changes to your G Suite tenant. Select **Accept**.
- ![G Suite Tenant Auth](./media/google-apps-provisioning-tutorial/gapps-auth.png)
+ ![G Suite Tenant Auth](./media/g-suite-provisioning-tutorial/gapps-auth.png)
7. In the Azure portal, click **Test Connection** to ensure Azure AD can connect to G Suite. If the connection fails, ensure your G Suite account has Admin permissions and try again. Then try the **Authorize** step again.
@@ -271,7 +264,13 @@ Once you've configured provisioning, use the following resources to monitor your
1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change log
+
+* 10/17/2020 - Added support for additional G Suite user and group attributes.
+* 10/17/2020 - Updated G Suite target attribute names to match what is defined [here](https://developers.google.com/admin-sdk/directory).
+* 10/17/2020 - Updated default attribute mappings.
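The default attribute mappings mentioned in the change log translate Azure AD user attributes into the Google Directory API user resource shape. A simplified, illustrative sketch of that translation (the provisioning service applies whatever mappings you configure in the portal, not this hard-coded subset; `primaryEmail`, `name`, and `suspended` are standard Directory API fields, while the flat input dict is an assumption):

```python
def to_gsuite_user(azure_user: dict) -> dict:
    """Map a few common Azure AD user attributes to the G Suite
    (Google Directory API) user resource shape. Illustrative only."""
    return {
        # Must use a verified custom domain; *.onmicrosoft.com is rejected.
        "primaryEmail": azure_user["userPrincipalName"],
        "name": {
            "givenName": azure_user.get("givenName", ""),
            "familyName": azure_user.get("surname", ""),
        },
        # A disabled Azure AD account maps to a suspended G Suite account.
        "suspended": not azure_user.get("accountEnabled", True),
    }
```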
## Additional resources
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/greenhouse-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/greenhouse-tutorial.md
@@ -9,31 +9,30 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 02/18/2019
+ms.date: 11/25/2020
ms.author: jeedes --- # Tutorial: Azure Active Directory integration with Greenhouse
-In this tutorial, you learn how to integrate Greenhouse with Azure Active Directory (Azure AD).
-Integrating Greenhouse with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Greenhouse with Azure Active Directory (Azure AD). When you integrate Greenhouse with Azure AD, you can:
-* You can control in Azure AD who has access to Greenhouse.
-* You can enable your users to be automatically signed-in to Greenhouse (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Greenhouse.
+* Enable your users to be automatically signed-in to Greenhouse with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Greenhouse, you need the following items:
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Greenhouse single sign-on (SSO) enabled subscription.
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Greenhouse single sign-on enabled subscription
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
## Scenario description
-In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+In this tutorial, you configure and test Azure AD SSO in a test environment.
* Greenhouse supports **SP** initiated SSO
@@ -41,60 +40,39 @@ In this tutorial, you configure and test Azure AD single sign-on in a test envir
To configure the integration of Greenhouse into Azure AD, you need to add Greenhouse from the gallery to your list of managed SaaS apps.
-**To add Greenhouse from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Greenhouse** in the search box.
+1. Select **Greenhouse** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-4. In the search box, type **Greenhouse**, select **Greenhouse** from result panel then click **Add** button to add the application.
- ![Greenhouse in the results list](common/search-new-app.png)
+## Configure and test Azure AD SSO for Greenhouse
-## Configure and test Azure AD single sign-on
+Configure and test Azure AD SSO with Greenhouse using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Greenhouse.
-In this section, you configure and test Azure AD single sign-on with Greenhouse based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Greenhouse needs to be established.
+To configure and test Azure AD SSO with Greenhouse, perform the following steps:
-To configure and test Azure AD single sign-on with Greenhouse, you need to complete the following building blocks:
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
+2. **[Configure Greenhouse SSO](#configure-greenhouse-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Greenhouse test user](#create-greenhouse-test-user)** - to have a counterpart of Britta Simon in Greenhouse that is linked to the Azure AD representation of user.
+3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Greenhouse Single Sign-On](#configure-greenhouse-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Greenhouse test user](#create-greenhouse-test-user)** - to have a counterpart of Britta Simon in Greenhouse that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+## Configure Azure AD SSO
-### Configure Azure AD single sign-on
+Follow these steps to enable Azure AD SSO in the Azure portal.
-In this section, you enable Azure AD single sign-on in the Azure portal.
-
-To configure Azure AD single sign-on with Greenhouse, perform the following steps:
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Greenhouse** application integration page, select **Single sign-on**.
-
- ![Configure single sign-on link](common/select-sso.png)
-
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Greenhouse** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)

4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Greenhouse Domain and URLs single sign-on information](common/sp-identifier.png)
- a. In the **Sign on URL** text box, type a URL using the following pattern: `https://<companyname>.greenhouse.io`
@@ -112,66 +90,57 @@ To configure Azure AD single sign-on with Greenhouse, perform the following step
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
- b. Azure Ad Identifier
+### Create an Azure AD test user
- c. Logout URL
+In this section, you'll create a test user in the Azure portal called B.Simon.
-### Configure Greenhouse Single Sign-On
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
-To configure single sign-on on **Greenhouse** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Greenhouse support team](https://www.greenhouse.io/contact). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create an Azure AD test user
-
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+### Assign the Azure AD test user
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Greenhouse.
- d. Click **Create**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Greenhouse**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
-### Assign the Azure AD test user
+## Configure Greenhouse SSO
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Greenhouse.
+1. In a different web browser window, sign in to the Greenhouse website as an administrator.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Greenhouse**.
+1. Go to **Configure > Dev Center > Single Sign-On**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![screenshot for the sso page](./media/greenhouse-tutorial/configure.png)
-2. In the applications list, select **Greenhouse**.
+1. Perform the following steps in the Single Sign-On page.
- ![The Greenhouse link in the Applications list](common/all-applications.png)
+ ![screenshot for the sso configuration page](./media/greenhouse-tutorial/sso-page.png)
-3. In the menu on the left, select **Users and groups**.
+ a. Copy **SSO Assertion Consumer URL** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
- ![The "Users and groups" link](common/users-groups-blade.png)
+ b. In the **Entity ID/Issuer** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+ c. In the **Single Sign-On URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
- ![The Add Assignment pane](common/add-assign-user.png)
+   d. Open the downloaded **Federation Metadata XML** from the Azure portal in Notepad and paste the content into the **IdP Certificate Fingerprint** textbox.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+ e. Select the **Name Identifier Format** value from the dropdown.
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+ f. Click **Begin Testing**.
-7. In the **Add Assignment** dialog click the **Assign** button.
+ >[!NOTE]
+   >Alternatively, you can upload the **Federation Metadata XML** file by clicking the **Choose File** option.
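Step d above asks for the IdP certificate fingerprint, which is derived from the certificate embedded in the **Federation Metadata XML**. An illustrative sketch of how such a fingerprint is computed (an assumption for clarity: Greenhouse may compute this itself from the uploaded file, and the digest it expects may differ from the SHA-256 used here):

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

DS_NS = "{http://www.w3.org/2000/09/xmldsig#}"

def cert_fingerprint(metadata_xml: str, digest: str = "sha256") -> str:
    """Extract the first X509Certificate from SAML federation metadata and
    return its colon-separated hex fingerprint over the DER bytes."""
    root = ET.fromstring(metadata_xml)
    cert_b64 = root.find(f".//{DS_NS}X509Certificate").text
    # The base64 blob in metadata is often wrapped across lines.
    der = base64.b64decode("".join(cert_b64.split()))
    digest_hex = hashlib.new(digest, der).hexdigest().upper()
    return ":".join(digest_hex[i:i + 2] for i in range(0, len(digest_hex), 2))
```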
### Create Greenhouse test user
@@ -184,17 +153,13 @@ In order to enable Azure AD users to log into Greenhouse, they must be provision
1. Log in to your **Greenhouse** company site as an administrator.
-2. In the menu on the top, click **Configure**, and then click **Users**.
+2. Go to **Configure** > **Users** > **New Users**.
- ![Users](./media/greenhouse-tutorial/ic790791.png "Users")
+ ![Users](./media/greenhouse-tutorial/create-user-1.png "Users")
-3. Click **New Users**.
+4. In the **Add New Users** section, perform the following steps:
- ![New User](./media/greenhouse-tutorial/ic790792.png "New User")
-
-4. In the **Add New User** section, perform the following steps:
-
- ![Add New User](./media/greenhouse-tutorial/ic790793.png "Add New User")
+ ![Add New User](./media/greenhouse-tutorial/create-user-2.png "Add New User")
a. In the **Enter user emails** textbox, type the email address of a valid Azure Active Directory account you want to provision.
@@ -203,16 +168,17 @@ In order to enable Azure AD users to log into Greenhouse, they must be provision
>[!NOTE]
>The Azure Active Directory account holders will receive an email including a link to confirm the account before it becomes active.
-### Test single sign-on
+### Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Click on **Test this application** in the Azure portal. This redirects to the Greenhouse Sign-on URL, where you can initiate the login flow.
-When you click the Greenhouse tile in the Access Panel, you should be automatically signed in to the Greenhouse for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Greenhouse Sign-on URL directly and initiate the login flow from there.
-## Additional Resources
+* You can use Microsoft My Apps. When you click the Greenhouse tile in My Apps, you're redirected to the Greenhouse Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)\ No newline at end of file
+Once you configure Greenhouse, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/heybuddy-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/heybuddy-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 10/23/2019
+ms.date: 12/16/2020
ms.author: jeedes
---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate HeyBuddy with Azure Active Direc
* Enable your users to be automatically signed-in to HeyBuddy with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -35,16 +34,17 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.

* HeyBuddy supports **SP** initiated SSO
* HeyBuddy supports **Just In Time** user provisioning
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
## Adding HeyBuddy from the gallery

To configure the integration of HeyBuddy into Azure AD, you need to add HeyBuddy from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
@@ -52,11 +52,11 @@ To configure the integration of HeyBuddy into Azure AD, you need to add HeyBuddy
1. Select **HeyBuddy** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for HeyBuddy
+## Configure and test Azure AD SSO for HeyBuddy
Configure and test Azure AD SSO with HeyBuddy using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in HeyBuddy.
-To configure and test Azure AD SSO with HeyBuddy, complete the following building blocks:
+To configure and test Azure AD SSO with HeyBuddy, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -69,9 +69,9 @@ To configure and test Azure AD SSO with HeyBuddy, complete the following buildin
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **HeyBuddy** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **HeyBuddy** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -80,11 +80,8 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
a. In the **Sign on URL** text box, type a URL using the following pattern: `https://api.heybuddy.com/auth/<ENTITY ID>`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `YourCompanyInstanceofHeyBuddy`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign-On URL and Identifier (Entity ID). The `Entity ID` in the Sign on url is auto generated for each organization. Contact [HeyBuddy Client support team](mailto:support@heybuddy.com) to get these values.
+ > This value is not real. Update it with the actual Sign on URL. The `Entity ID` in the Sign on URL is auto-generated for each organization. Contact the [HeyBuddy Client support team](mailto:support@heybuddy.com) to get this value.
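Since the `Entity ID` segment of the Sign on URL is organization-specific, a small helper can assemble the URL once support provides the ID. This is purely illustrative: the host and path come from the pattern above, and the function name is made up.

```python
from urllib.parse import quote

def heybuddy_sign_on_url(entity_id: str) -> str:
    """Build the SP-initiated Sign on URL from an org's Entity ID."""
    if not entity_id:
        raise ValueError("Entity ID must be supplied by HeyBuddy support")
    # URL-encode the ID in case it contains reserved characters.
    return f"https://api.heybuddy.com/auth/{quote(entity_id, safe='')}"
```

For example, `heybuddy_sign_on_url("org123")` yields `https://api.heybuddy.com/auth/org123`, matching the documented pattern.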
1. HeyBuddy application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
@@ -98,7 +95,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| | |

> [!NOTE]
- > Please refer to this [link](../develop/active-directory-enterprise-app-role-management.md) on how to configure and setup the roles for the application.
+ > Refer to this [link](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) to learn how to configure and set up the roles for the application.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
@@ -123,15 +120,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **HeyBuddy**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure HeyBuddy SSO
@@ -147,16 +138,15 @@ In this section, a user called Britta Simon is created in HeyBuddy. HeyBuddy sup
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the HeyBuddy tile in the Access Panel, you should be automatically signed in to the HeyBuddy for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This redirects to the HeyBuddy Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the HeyBuddy Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the HeyBuddy tile in My Apps, you're redirected to the HeyBuddy Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try HeyBuddy with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure HeyBuddy, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/kumolus-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kumolus-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 08/04/2020
+ms.date: 12/16/2020
ms.author: jeedes
---
@@ -66,7 +66,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Kumolus** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -98,7 +98,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| role | user.assignedroles |

> [!NOTE]
- > Kumolus expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/active-directory-enterprise-app-role-management.md).
+ > Kumolus expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
@@ -129,7 +129,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Kumolus SSO
@@ -156,6 +156,6 @@ In this section, you test your Azure AD single sign-on configuration with follow
You can also use Microsoft Access Panel to test the application in any mode. When you click the Kumolus tile in the Access Panel, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Kumolus instance for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-## Next Steps
+## Next steps
Once you configure Kumolus, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/mapbox-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mapbox-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 08/20/2020
+ms.date: 12/16/2020
ms.author: jeedes
---
@@ -21,7 +21,6 @@ In this tutorial, you'll learn how to integrate Mapbox with Azure Active Directo
* Enable your users to be automatically signed-in to Mapbox with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
@@ -35,7 +34,6 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * Mapbox supports **IDP** initiated SSO
-* Once you configure Mapbox you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
> [!NOTE]
> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
@@ -44,18 +42,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Mapbox into Azure AD, you need to add Mapbox from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Mapbox** in the search box.
1. Select **Mapbox** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Mapbox
+## Configure and test Azure AD SSO for Mapbox
Configure and test Azure AD SSO with Mapbox using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Mapbox.
-To configure and test Azure AD SSO with Mapbox, complete the following building blocks:
+To configure and test Azure AD SSO with Mapbox, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -68,9 +66,9 @@ To configure and test Azure AD SSO with Mapbox, complete the following building
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Mapbox** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Mapbox** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -88,7 +86,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| | |

> [!NOTE]
- > To understand how to configure roles in Azure AD, see [here](../develop/active-directory-enterprise-app-role-management.md).
+ > To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
@@ -117,15 +115,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Mapbox**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Mapbox SSO
@@ -162,20 +154,13 @@ In this section, you create a user called Britta Simon in Mapbox. Work with [Ma
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Mapbox tile in the Access Panel, you should be automatically signed in to the Mapbox for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+In this section, you test your Azure AD single sign-on configuration with the following options.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Mapbox application for which you set up SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+* You can use Microsoft My Apps. When you click the Mapbox tile in My Apps, you should be automatically signed in to the Mapbox application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try Mapbox with Azure AD](https://aad.portal.azure.com/)
-- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect Mapbox with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure Mapbox, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/netskope-cloud-security-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/netskope-cloud-security-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial
ms.workload: identity
ms.topic: tutorial
-ms.date: 10/31/2019
+ms.date: 12/17/2020
ms.author: jeedes
---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Netskope Administrator Console w
* Enable your users to be automatically signed-in to Netskope Administrator Console with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
## Prerequisites

To get started, you need the following items:
@@ -36,50 +34,51 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
* Netskope Administrator Console supports **SP and IDP** initiated SSO
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
++
## Adding Netskope Administrator Console from the gallery

To configure the integration of Netskope Administrator Console into Azure AD, you need to add Netskope Administrator Console from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add new application, select **New application**.
1. In the **Add from the gallery** section, type **Netskope Administrator Console** in the search box.
1. Select **Netskope Administrator Console** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Netskope Administrator Console
+## Configure and test Azure AD SSO for Netskope Administrator Console
Configure and test Azure AD SSO with Netskope Administrator Console using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Netskope Administrator Console.
-To configure and test Azure AD SSO with Netskope Administrator Console, complete the following building blocks:
+To configure and test Azure AD SSO with Netskope Administrator Console, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Netskope Administrator Console SSO](#configure-netskope-administrator-console-sso)** - to configure the single sign-on settings on application side.
- * **[Create Netskope Administrator Console test user](#create-netskope-administrator-console-test-user)** - to have a counterpart of B.Simon in Netskope Administrator Console that is linked to the Azure AD representation of user.
+ 1. **[Create Netskope Administrator Console test user](#create-netskope-administrator-console-test-user)** - to have a counterpart of B.Simon in Netskope Administrator Console that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO

Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Netskope Administrator Console** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Netskope Administrator Console** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)

1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
- a. In the **Identifier** text box, type a URL using the following pattern:
- `<OrgKey>`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ In the **Reply URL** text box, type a URL using the following pattern:
`https://<tenant_host_name>/saml/acs`

> [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. You will get these values explained later in the tutorial.
+ > This value is not real. Update it with the actual Reply URL, which you will get later in the tutorial.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
@@ -100,7 +99,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| admin-role | user.assignedroles |

> [!NOTE]
- > Click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to create roles in Azure AD.
+ > Click [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui) to learn how to create roles in Azure AD.
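The `admin-role` claim above travels inside the SAML assertion's attribute statement. As a hedged illustration of what the service-provider side consumes (element names follow the SAML 2.0 assertion schema; this is not Netskope's actual parsing code):

```python
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def assertion_attributes(assertion_xml: str) -> dict:
    """Map each SAML 2.0 Attribute Name to its list of string values."""
    root = ET.fromstring(assertion_xml)
    attrs = {}
    for attr in root.findall(".//saml:Attribute", SAML_NS):
        attrs[attr.get("Name")] = [
            v.text for v in attr.findall("saml:AttributeValue", SAML_NS)
        ]
    return attrs
```

With the mapping above, the claim named `admin-role` would surface the role values assigned to the user in Azure AD.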
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -129,15 +128,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Netskope Administrator Console**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button.

## Configure Netskope Administrator Console SSO
@@ -212,16 +205,20 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This redirects to the Netskope Administrator Console Sign-on URL, where you can initiate the login flow.
-When you click the Netskope Administrator Console tile in the Access Panel, you should be automatically signed in to the Netskope Administrator Console for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Go to the Netskope Administrator Console Sign-on URL directly and initiate the login flow from there.
-## Additional resources
+#### IDP initiated:
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Netskope Administrator Console for which you set up SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Netskope Administrator Console tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to the Netskope Administrator Console for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Netskope Administrator Console with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Netskope Administrator Console, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/printerlogic-saas-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/printerlogic-saas-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 12/16/2019
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate PrinterLogic SaaS with Azure Act
* Enable your users to be automatically signed-in to PrinterLogic SaaS with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -41,18 +39,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of PrinterLogic SaaS into Azure AD, you need to add PrinterLogic SaaS from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **PrinterLogic SaaS** in the search box. 1. Select **PrinterLogic SaaS** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for PrinterLogic SaaS
+## Configure and test Azure AD SSO for PrinterLogic SaaS
Configure and test Azure AD SSO with PrinterLogic SaaS using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in PrinterLogic SaaS.
-To configure and test Azure AD SSO with PrinterLogic SaaS, complete the following building blocks:
+To configure and test Azure AD SSO with PrinterLogic SaaS, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -65,9 +63,9 @@ To configure and test Azure AD SSO with PrinterLogic SaaS, complete the followin
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **PrinterLogic SaaS** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **PrinterLogic SaaS** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -98,7 +96,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| Role | user.assignedroles | > [!NOTE]
- > Please click [here](../develop/active-directory-enterprise-app-role-management.md) to know how to configure Role in Azure AD
+ > To learn how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
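The Role claim mentioned above is populated from app roles defined on the app registration's manifest. As a rough illustration only (the GUID, display name, and value below are placeholders, not values from this article), an `appRoles` manifest entry can be sketched as a Python dict:

```python
# Hypothetical sketch of one entry in the "appRoles" array of an Azure AD app
# manifest. The "value" field is the string emitted in the SAML role claim
# (sourced from user.assignedroles); "id" must be a unique GUID.
import json
import uuid

app_role = {
    "allowedMemberTypes": ["User"],        # assignable to users and groups
    "description": "Standard application user",  # placeholder description
    "displayName": "User",                 # placeholder display name
    "id": str(uuid.uuid4()),               # any unique GUID
    "isEnabled": True,
    "value": "User",                       # placeholder role value
}

print(json.dumps({"appRoles": [app_role]}, indent=2))
```

Assigning a user to this role in **Users and groups** is what causes the claim to carry the `value` string.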
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -127,15 +125,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **PrinterLogic SaaS**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select one from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure PrinterLogic SaaS SSO
@@ -148,16 +140,21 @@ In this section, a user called Britta Simon is created in PrinterLogic SaaS. Pri
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the PrinterLogic SaaS Sign-on URL, where you can initiate the login flow.
+
+* Go to the PrinterLogic SaaS Sign-on URL directly and initiate the login flow from there.
-When you click the PrinterLogic SaaS tile in the Access Panel, you should be automatically signed in to the PrinterLogic SaaS for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to PrinterLogic SaaS, for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the PrinterLogic SaaS tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you should be automatically signed in to PrinterLogic SaaS, for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try PrinterLogic SaaS with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure PrinterLogic SaaS, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/prodpad-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/prodpad-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 10/28/2020
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -67,7 +67,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **ProdPad** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -93,7 +93,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| User.ProdpadRole | user.assignedroles | > [!NOTE]
- > ProdPad expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/active-directory-enterprise-app-role-management.md).
+ > ProdPad expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -123,7 +123,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. If you have set up the roles as explained above, you can select one from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure ProdPad SSO
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/servicechannel-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/servicechannel-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 08/29/2019
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate ServiceChannel with Azure Active
* Enable your users to be automatically signed-in to ServiceChannel with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -41,18 +39,18 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of ServiceChannel into Azure AD, you need to add ServiceChannel from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **ServiceChannel** in the search box. 1. Select **ServiceChannel** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for ServiceChannel
+## Configure and test Azure AD SSO for ServiceChannel
Configure and test Azure AD SSO with ServiceChannel using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ServiceChannel.
-To configure and test Azure AD SSO with ServiceChannel, complete the following building blocks:
+To configure and test Azure AD SSO with ServiceChannel, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -65,9 +63,9 @@ To configure and test Azure AD SSO with ServiceChannel, complete the following b
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **ServiceChannel** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **ServiceChannel** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -80,9 +78,9 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<customer domain>.servicechannel.com/saml/acs` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Here we suggest you to use the unique value of string in the Identifier. Contact [ServiceChannel Client support team](https://servicechannel.zendesk.com/hc/en-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update them with the actual Identifier and Reply URL. We suggest using a unique string value for the Identifier. Contact the [ServiceChannel Client support team](https://servicechannel.zendesk.com/hc/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. The role claim is pre-configured so you don't have to configure it but you still need to create them in Azure AD using this [article](../develop/active-directory-enterprise-app-role-management.md). You can refer ServiceChannel guide [here](https://servicechannel.zendesk.com/hc/articles/217514326-Azure-AD-Configuration-Example) for more guidance on claims.
+1. The role claim is pre-configured, so you don't have to configure it, but you still need to create the roles in Azure AD by using this [article](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui). For more guidance on claims, you can refer to the ServiceChannel guide [here](https://servicechannel.zendesk.com/hc/articles/217514326-Azure-AD-Configuration-Example).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
@@ -111,20 +109,14 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **ServiceChannel**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select one from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure ServiceChannel SSO
-To configure single sign-on on **ServiceChannel** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [ServiceChannel support team](https://servicechannel.zendesk.com/hc/en-us). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the **ServiceChannel** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [ServiceChannel support team](https://servicechannel.zendesk.com/hc/). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create ServiceChannel test user
@@ -132,16 +124,13 @@ Application supports Just in time user provisioning and after authentication use
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the ServiceChannel tile in the Access Panel, you should be automatically signed in to the ServiceChannel for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the ServiceChannel instance for which you set up SSO.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the ServiceChannel tile in My Apps, you should be automatically signed in to the ServiceChannel instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try ServiceChannel with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure ServiceChannel, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/servicenow-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
@@ -45,7 +45,7 @@ The scenario outlined in this tutorial assumes that you already have the followi
1. Identify your ServiceNow instance name. You can find the instance name in the URL that you use to access ServiceNow. In the example below, the instance name is dev35214.
- ![ServiceNow Instance](media/servicenow-provisioning-tutorial/servicenow_instance.png)
+ ![ServiceNow Instance](media/servicenow-provisioning-tutorial/servicenow-instance.png)
2. Obtain credentials for an admin in ServiceNow. Navigate to the user profile in ServiceNow and verify that the user has the admin role.
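As a small aid to step 1 (not part of the original instructions), the instance name is simply the first label of the instance URL's hostname; a minimal sketch:

```python
# Extract the ServiceNow instance name (e.g. "dev35214") from an instance URL.
# Instance URLs take the form https://<instance>.service-now.com/...
from urllib.parse import urlparse

def instance_name(url: str) -> str:
    host = urlparse(url).hostname or ""
    return host.split(".")[0]

print(instance_name("https://dev35214.service-now.com/navpage.do"))  # dev35214
```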
@@ -89,7 +89,7 @@ This section guides you through the steps to configure the Azure AD provisioning
5. Under the **Admin Credentials** section, input your ServiceNow admin credentials and username. Click **Test Connection** to ensure Azure AD can connect to ServiceNow. If the connection fails, ensure your ServiceNow account has Admin permissions and try again.
- ![Screenshot shows the Service Provisioning page, where you can enter Admin Credentials.](./media/servicenow-provisioning-tutorial/provisioning.png)
+ ![Screenshot shows the Service Provisioning page, where you can enter Admin Credentials.](./media/servicenow-provisioning-tutorial/servicenow-provisioning.png)
6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
@@ -137,11 +137,16 @@ Once you've configured provisioning, use the following resources to monitor your
`Details: Your ServiceNow instance name appears to be invalid. Please provide a current ServiceNow administrative user name and password along with the name of a valid ServiceNow instance.`
- This error indicates an issue communicating with the ServiceNow instance. Double-check to make sure that the following settings are *disabled* in ServiceNow:
+ This error indicates an issue communicating with the ServiceNow instance.
+
+ If you're having test connection issues, try disabling the following settings in ServiceNow:
1. Select **System Security** > **High security settings** > **Require basic authentication for incoming SCHEMA requests**. 2. Select **System Properties** > **Web Services** > **Require basic authorization for incoming SOAP requests**.
+ ![Authorizing SOAP request](media/servicenow-provisioning-tutorial/servicenow-webservice.png)
+
+ If that doesn't resolve the issue, contact ServiceNow support and ask them to turn on SOAP debugging to help troubleshoot.
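**Test Connection** authenticates to the instance with the admin credentials over basic authentication. As a hedged sketch (the instance name, credentials, and use of the Table API here are illustrative placeholders, not taken from this article), you can exercise the same style of authenticated call yourself:

```python
# Sketch: build a basic-auth request against the ServiceNow Table API to check
# that admin credentials work. All values below are placeholders.
import base64
import urllib.request

def basic_auth_header(user: str, password: str) -> str:
    # Basic auth is just "Basic " + base64("user:password")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

instance = "dev35214"  # placeholder instance name
url = f"https://{instance}.service-now.com/api/now/table/sys_user?sysparm_limit=1"
req = urllib.request.Request(url, headers={
    "Authorization": basic_auth_header("admin", "password"),  # placeholders
    "Accept": "application/json",
})
# Uncomment to run against a real instance; an HTTP 200 means the credentials
# and instance name are valid:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```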
## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/shmoopforschools-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/shmoopforschools-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 08/12/2019
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate Shmoop For Schools with Azure Ac
* Enable your users to be automatically signed-in to Shmoop For Schools with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -41,33 +39,33 @@ In this tutorial, you configure and test Azure AD SSO in a test environment.
To configure the integration of Shmoop For Schools into Azure AD, you need to add Shmoop For Schools from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Shmoop For Schools** in the search box. 1. Select **Shmoop For Schools** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Shmoop For Schools
+## Configure and test Azure AD SSO for Shmoop For Schools
Configure and test Azure AD SSO with Shmoop For Schools using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Shmoop For Schools.
-To configure and test Azure AD SSO with Shmoop For Schools, complete the following building blocks:
+To configure and test Azure AD SSO with Shmoop For Schools, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
2. **[Configure Shmoop For Schools SSO](#configure-shmoop-for-schools-sso)** - to configure the Single Sign-On settings on application side.
- * **[Create Shmoop For Schools test user](#create-shmoop-for-schools-test-user)** - to have a counterpart of B.Simon in Shmoop For Schools that is linked to the Azure AD representation of user.
+ 1. **[Create Shmoop For Schools test user](#create-shmoop-for-schools-test-user)** - to have a counterpart of B.Simon in Shmoop For Schools that is linked to the Azure AD representation of user.
3. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Shmoop For Schools** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Shmoop For Schools** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -86,15 +84,15 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
![image](common/default-attributes.png)
- > [!NOTE]
- > Shmoop for School supports two roles for users: **Teacher** and **Student**. Set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/active-directory-enterprise-app-role-management.md).
- 1. In addition to the above, the Shmoop For Schools application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them per your requirements. | Name | Source Attribute| | --------- | --------------- | | role | user.assignedroles |
+ > [!NOTE]
+ > Shmoop for School supports two roles for users: **Teacher** and **Student**. Set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png)
@@ -118,15 +116,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Shmoop For Schools**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select one from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Shmoop For Schools SSO
@@ -142,16 +134,15 @@ In this section, a user called B.Simon is created in Shmoop For Schools. Shmoop
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Shmoop For Schools tile in the Access Panel, you should be automatically signed in to the Shmoop For Schools for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click **Test this application** in the Azure portal. This redirects to the Shmoop For Schools Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Shmoop For Schools Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Shmoop For Schools tile in My Apps, you are redirected to the Shmoop For Schools Sign-on URL. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Shmoop For Schools with Azure AD](https://aad.portal.azure.com/)\ No newline at end of file
+Once you configure Shmoop For Schools, you can enforce session control, which protects against the exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/teamzskill-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/teamzskill-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 09/17/2020
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -66,7 +66,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **TeamzSkill** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -104,7 +104,7 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| role | user.assignedroles | > [!NOTE]
- > TeamzSkill expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/active-directory-enterprise-app-role-management.md).
+ > TeamzSkill expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
@@ -135,7 +135,7 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. If you have set up the roles as explained above, you can select a role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure TeamzSkill SSO
@@ -177,6 +177,6 @@ In this section, you test your Azure AD single sign-on configuration with follow
You can also use Microsoft Access Panel to test the application in any mode. When you click the TeamzSkill tile in the Access Panel, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you are automatically signed in to the TeamzSkill instance for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
-## Next Steps
+## Next steps
Once you configure TeamzSkill you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).\ No newline at end of file
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/tickitlms-learn-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tickitlms-learn-tutorial.md
@@ -9,7 +9,7 @@ ms.service: active-directory
ms.subservice: saas-app-tutorial ms.workload: identity ms.topic: tutorial
-ms.date: 03/30/2020
+ms.date: 12/18/2020
ms.author: jeedes ---
@@ -21,8 +21,6 @@ In this tutorial, you'll learn how to integrate TickitLMS Learn with Azure Activ
* Enable your users to be automatically signed-in to TickitLMS Learn with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
@@ -35,13 +33,12 @@ To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment. * TickitLMS Learn supports **SP and IDP** initiated SSO
-* Once you configure TickitLMS Learn you can enforce session control, which protect exfiltration and infiltration of your organizationΓÇÖs sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding TickitLMS Learn from the gallery To configure the integration of TickitLMS Learn into Azure AD, you need to add TickitLMS Learn from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
@@ -49,11 +46,11 @@ To configure the integration of TickitLMS Learn into Azure AD, you need to add T
1. Select **TickitLMS Learn** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for TickitLMS Learn
+## Configure and test Azure AD SSO for TickitLMS Learn
Configure and test Azure AD SSO with TickitLMS Learn using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TickitLMS Learn.
-To configure and test Azure AD SSO with TickitLMS Learn, complete the following building blocks:
+To configure and test Azure AD SSO with TickitLMS Learn, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
@@ -66,9 +63,9 @@ To configure and test Azure AD SSO with TickitLMS Learn, complete the following
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **TickitLMS Learn** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **TickitLMS Learn** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
@@ -95,6 +92,9 @@ Follow these steps to enable Azure AD SSO in the Azure portal.
| department | user.department | | reportsto | user.reportsto |
+ > [!NOTE]
+ > TickitLMS Learn expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
+ 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png)
@@ -117,15 +117,9 @@ In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **TickitLMS Learn**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you have set up the roles as explained above, you can select a role from the **Select a role** dropdown.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure TickitLMS Learn SSO
@@ -138,20 +132,21 @@ In this section, you create a user called Britta Simon in TickitLMS Learn. Work
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
-When you click the TickitLMS Learn tile in the Access Panel, you should be automatically signed in to the TickitLMS Learn for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in the Azure portal. This will redirect to the TickitLMS Learn Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the TickitLMS Learn Sign-on URL directly and initiate the login flow from there.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+#### IDP initiated:
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the TickitLMS Learn for which you set up the SSO.
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the TickitLMS Learn tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode you are automatically signed in to the TickitLMS Learn instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [Try TickitLMS Learn with Azure AD](https://aad.portal.azure.com/) -- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad)
+## Next steps
-- [How to protect TickitLMS Learn with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)\ No newline at end of file
+Once you configure TickitLMS Learn you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
\ No newline at end of file
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-custom-container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-custom-container.md
@@ -13,7 +13,7 @@ zone_pivot_groups: app-service-containers-windows-linux
::: zone pivot="container-windows"
-[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. The preconfigured Windows environment locks down the operating system from administrative access, software installations, changes to the global assembly cache, and so on (see [Operating system functionality on Azure App Service](operating-system-functionality.md)). However, using a custom Windows container in App Service (Preview) lets you make OS changes that your app needs, so it's easy to migrate on-premises app that requires custom OS and software configuration. This tutorial demonstrates how to migrate to App Service an ASP.NET app that uses custom fonts installed in the Windows font library. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](../container-registry/index.yml), and then run it in App Service.
+[Azure App Service](overview.md) provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. The preconfigured Windows environment locks down the operating system from administrative access, software installations, changes to the global assembly cache, and so on (see [Operating system functionality on Azure App Service](operating-system-functionality.md)). However, using a custom Windows container in App Service lets you make OS changes that your app needs, so it's easy to migrate an on-premises app that requires custom OS and software configuration. This tutorial demonstrates how to migrate an ASP.NET app that uses custom fonts installed in the Windows font library to App Service. You deploy a custom-configured Windows image from Visual Studio to [Azure Container Registry](../container-registry/index.yml), and then run it in App Service.
![Shows the web app running in a Windows container.](media/tutorial-custom-container/app-running.png)
app-service https://docs.microsoft.com/en-us/azure/app-service/tutorial-python-postgresql-app https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-python-postgresql-app.md
@@ -3,7 +3,7 @@ title: 'Tutorial: Deploy a Python Django app with Postgres'
description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework and the app is hosted on Azure App Service on Linux. ms.devlang: python ms.topic: tutorial
-ms.date: 11/02/2020
+ms.date: 01/04/2021
ms.custom: [mvc, seodec18, seo-python-october2019, cli-validate, devx-track-python, devx-track-azurecli] --- # Tutorial: Deploy a Django web app with PostgreSQL in Azure App Service
@@ -224,15 +224,12 @@ Django database migrations ensure that the schema in the PostgreSQL on Azure dat
1. In the SSH session, run the following commands (you can paste commands using **Ctrl**+**Shift**+**V**): ```bash
- # Change to the folder where the app code is deployed
- cd site/wwwroot
+ # Change to the app folder
+ cd $APP_PATH
- # Activate default virtual environment in App Service container
+ # Activate the venv (requirements.txt is installed automatically)
source /antenv/bin/activate
- # Install packages
- pip install -r requirements.txt
- # Run database migrations python manage.py migrate
@@ -240,6 +237,8 @@ Django database migrations ensure that the schema in the PostgreSQL on Azure dat
python manage.py createsuperuser ```
+ If you encounter any errors related to connecting to the database, check the values of the application settings created in the previous section.
+ 1. The `createsuperuser` command prompts you for superuser credentials. For the purposes of this tutorial, use the default username `root`, press **Enter** for the email address to leave it blank, and enter `Pollsdb1` for the password. 1. If you see an error that the database is locked, make sure that you ran the `az webapp settings` command in the previous section. Without those settings, the migrate command cannot communicate with the database, resulting in the error.
@@ -248,13 +247,13 @@ Having issues? Refer first to the [Troubleshooting guide](configure-language-pyt
### 4.4 Create a poll question in the app
-1. In a browser, open the URL `http://<app-name>.azurewebsites.net`. The app should display the message "No polls are available" because there are no specific polls yet in the database.
+1. In a browser, open the URL `http://<app-name>.azurewebsites.net`. The app should display the message "Polls app" and "No polls are available" because there are no specific polls yet in the database.
If you see "Application Error", then it's likely that you either didn't create the required settings in the previous step, [Configure environment variables to connect the database](#42-configure-environment-variables-to-connect-the-database), or that those values contain errors. Run the command `az webapp config appsettings list` to check the settings. You can also [check the diagnostic logs](#6-stream-diagnostic-logs) to see specific errors during app startup. For example, if you didn't create the settings, the logs will show the error, `KeyError: 'DBNAME'`. After updating the settings to correct any errors, give the app a minute to restart, then refresh the browser.
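The `KeyError: 'DBNAME'` failure mode above can be sketched in a few lines: app settings surface as environment variables, and a direct `os.environ[...]` lookup raises `KeyError` when the setting was never created. This is an illustrative sketch only; the helper function and the setting names other than `DBNAME` are hypothetical, not the tutorial's actual settings code.

```python
import os

def read_db_settings():
    # os.environ[...] raises KeyError when the App Service setting is absent.
    return {
        "NAME": os.environ["DBNAME"],
        "HOST": os.environ.get("DBHOST", "localhost"),  # hypothetical setting name
    }

os.environ["DBNAME"] = "pollsdb"   # simulate a configured app setting
print(read_db_settings()["NAME"])  # → pollsdb

del os.environ["DBNAME"]           # simulate the missing setting
try:
    read_db_settings()
except KeyError as exc:
    print(f"KeyError: {exc}")      # → KeyError: 'DBNAME'
```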
-1. Browse to `http://<app-name>.azurewebsites.net/admin`. Sign in using superuser credentials from the previous section (`root` and `Pollsdb1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
+1. Browse to `http://<app-name>.azurewebsites.net/admin`. Sign in using Django superuser credentials from the previous section (`root` and `Pollsdb1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
1. Browse again to `http://<app-name>.azurewebsites.net` to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
@@ -280,7 +279,7 @@ In a terminal window, run the following commands. Be sure to follow the prompts
python3 -m venv venv source venv/bin/activate
-# Install packages
+# Install dependencies
pip install -r requirements.txt # Run Django migrations python manage.py migrate
@@ -298,7 +297,7 @@ py -3 -m venv venv
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned -Force venv\scripts\activate
-# Install packages
+# Install dependencies
pip install -r requirements.txt # Run Django migrations python manage.py migrate
@@ -315,7 +314,7 @@ python manage.py runserver
py -3 -m venv venv venv\scripts\activate
-:: Install packages
+:: Install dependencies
pip install -r requirements.txt :: Run Django migrations python manage.py migrate
@@ -385,11 +384,8 @@ Because you made changes to the data model, you need to rerun database migration
Open an SSH session again in the browser by navigating to `https://<app-name>.scm.azurewebsites.net/webssh/host`. Then run the following commands: ```
-cd site/wwwroot
-
-# Activate default virtual environment in App Service container
+cd $APP_PATH
source /antenv/bin/activate
-# Run database migrations
python manage.py migrate ```
automation https://docs.microsoft.com/en-us/azure/automation/automation-send-email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-send-email.md
@@ -3,7 +3,7 @@ title: Send an email from an Azure Automation runbook
description: This article tells how to send an email from within a runbook. services: automation ms.subservice: process-automation
-ms.date: 07/15/2019
+ms.date: 01/05/2021
ms.topic: conceptual ---
@@ -62,7 +62,7 @@ For other ways to create an Azure Key Vault and store a secret, see [Key Vault q
To use Azure Key Vault within a runbook, you must import the following modules into your Automation account:
-* [Az.Profile](https://www.powershellgallery.com/packages/Az.Profile)
+* [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts)
* [Az.KeyVault](https://www.powershellgallery.com/packages/Az.KeyVault) For instructions, see [Import Az modules](shared-resources/modules.md#import-az-modules).
@@ -138,7 +138,7 @@ If you don't initially see your test email, check your **Junk** and **Spam** fol
1. When the runbook is no longer needed, select it in the runbook list and click **Delete**.
-2. Delete the Key Vault by using the [Remove-AzKeyVault](/powershell/module/az.keyvault/remove-azkeyvault?view=azps-3.7.0) cmdlet.
+2. Delete the Key Vault by using the [Remove-AzKeyVault](/powershell/module/az.keyvault/remove-azkeyvault) cmdlet.
```azurepowershell-interactive $VaultName = "<your KeyVault name>"
automation https://docs.microsoft.com/en-us/azure/automation/update-management/remove-vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/remove-vms.md
@@ -3,7 +3,7 @@ title: Remove VMs from Azure Automation Update Management
description: This article tells how to remove machines managed with Update Management. services: automation ms.topic: conceptual
-ms.date: 09/09/2020
+ms.date: 01/05/2021
ms.custom: mvc --- # Remove VMs from Update Management
@@ -26,13 +26,20 @@ Sign in to the [Azure portal](https://portal.azure.com).
3. In the Azure portal, navigate to **Log Analytics workspaces**. Select your workspace from the list.
-4. In your Log Analytics workspace, select **Logs** and then and choose **Query explorer** from the top actions menu.
+4. In your Log Analytics workspace, select **Advanced settings** and then choose **Computer Groups** from the left-hand menu.
-5. From **Query explorer** in the right-hand pane, expand **Saved Queries\Updates** and select the saved search query `MicrosoftDefaultComputerGroup` to edit it.
+5. From **Computer Groups** in the right-hand pane, select **Saved groups**.
-6. In the query editor, review the query and find the UUID for the VM. Remove the UUID for the VM and repeat the steps for any other VMs you want to remove.
+6. From the table, for the saved search query **Updates:MicrosoftDefaultComputerGroup**, click the **View Members** icon to run the query and view its members.
-7. Save the saved search when you're finished editing it by selecting **Save** from the top bar.
+7. In the query editor, review the query and find the UUID for the VM. Remove the UUID for the VM and repeat the steps for any other VMs you want to remove.
+
+8. Save the saved search when you're finished editing it by selecting **Save** from the top bar. When prompted, specify the following:
+
+ * **Name**: MicrosoftDefaultComputerGroup
+ * **Save as**: Function
+ * **Alias**: Updates__MicrosoftDefaultComputerGroup
+ * **Category**: Updates
>[!NOTE] >Machines are still shown after you have unenrolled them because we report on all machines assessed in the last 24 hours. After removing the machine, you need to wait 24 hours before they are no longer listed.
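The edit in step 7 amounts to deleting one VM's UUID from the list inside the saved-search text. A minimal sketch of that string edit follows; the query text and UUIDs are hypothetical placeholders, not the real `MicrosoftDefaultComputerGroup` definition.

```python
def remove_vm_uuid(query, uuid):
    """Remove one quoted UUID (and an adjacent comma) from the saved-search text."""
    for pattern in (f'"{uuid}", ', f'"{uuid}",', f', "{uuid}"', f',"{uuid}"', f'"{uuid}"'):
        if pattern in query:
            return query.replace(pattern, "", 1)
    return query  # UUID not present; leave the query unchanged

# Hypothetical group definition listing enrolled machines by UUID
group_query = 'Heartbeat | where Computer in ("aaa-111", "bbb-222", "ccc-333")'
print(remove_vm_uuid(group_query, "bbb-222"))
# → Heartbeat | where Computer in ("aaa-111", "ccc-333")
```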
azure-app-configuration https://docs.microsoft.com/en-us/azure/azure-app-configuration/pull-key-value-devops-pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
@@ -99,6 +99,9 @@ echo "$env:myBuildSetting"
``` And the value will be printed to the console.
+> [!NOTE]
+> Azure Key Vault references within App Configuration will be resolved and set as [secret variables](/azure/devops/pipelines/process/variables#secret-variables). In Azure pipelines, secret variables are masked out from log. They are not passed into tasks as environment variables and must instead be passed as inputs.
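The masking behavior described in the note can be sketched with a simple log scrubber: any resolved secret value is replaced before the line reaches the log. This is illustrative only, not the Azure Pipelines implementation.

```python
def mask_secrets(line, secret_values):
    # Replace every occurrence of a known secret value with a mask,
    # the way pipeline logs hide secret variables.
    for value in secret_values:
        line = line.replace(value, "***")
    return line

print(mask_secrets("connecting with password=Hunter2!", ["Hunter2!"]))
# → connecting with password=***
```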
+ ## Troubleshooting If an unexpected error occurs, debug logs can be enabled by setting the pipeline variable `system.debug` to `true`.
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/manage-vm-extensions-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-cli.md
@@ -1,7 +1,7 @@
--- title: Enable VM extension using Azure CLI description: This article describes how to deploy virtual machine extensions to Azure Arc enabled servers running in hybrid cloud environments using the Azure CLI.
-ms.date: 11/20/2020
+ms.date: 01/05/2021
ms.topic: conceptual ms.custom: devx-track-azurecli ---
@@ -24,10 +24,10 @@ az extension add --name connectedmachine
To enable a VM extension on your Arc enabled server, use [az connectedmachine extension create](/cli/azure/ext/connectedmachine/connectedmachine/extension#ext_connectedmachine_az_connectedmachine_extension_create) with the `--machine-name`, `--extension-name`, `--location`, `--type`, `--settings`, and `--publisher` parameters.
-The following example enables the Log Analytics VM extension on an Arc enabled Linux server:
+The following example enables the Log Analytics VM extension on an Arc enabled server:
```azurecli
-az connectedmachine extension create --machine-name "myMachineName" --name "OmsAgentforLinux" --location "eastus" --type "CustomScriptExtension" --publisher "Microsoft.EnterpriseCloud.Monitoring" --settings "{\"workspaceId\":\"workspaceId"}" --protected-settings "{\workspaceKey\":"\workspaceKey"} --type-handler-version "1.10" --resource-group "myResourceGroup"
+az connectedmachine extension create --machine-name "myMachineName" --name "OmsAgentForLinux" --location "eastus" --settings '{\"workspaceId\":\"myWorkspaceId\"}' --protected-settings '{\"workspaceKey\":\"myWorkspaceKey\"}' --resource-group "myResourceGroup" --type-handler-version "1.13" --type "OmsAgentForLinux" --publisher "Microsoft.EnterpriseCloud.Monitoring"
``` For an Arc enabled Windows server, use `MicrosoftMonitoringAgent` as the value for both the `--name` and `--type` parameters. The following example enables the Custom Script Extension on an Arc enabled server:
@@ -74,7 +74,7 @@ To remove an installed VM extension on your Arc enabled server, use [az connecte
For example, to remove the Log Analytics VM extension for Linux, run the following command: ```azurecli
-az connectedmachine extension delete --machine-name "myMachineName" --name "OmsAgentforLinux" --resource-group "myResourceGroup"
+az connectedmachine extension delete --machine-name "myMachineName" --name "OmsAgentForLinux" --resource-group "myResourceGroup"
``` ## Next steps
azure-arc https://docs.microsoft.com/en-us/azure/azure-arc/servers/manage-vm-extensions-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-powershell.md
@@ -1,7 +1,7 @@
--- title: Enable VM extension using Azure PowerShell description: This article describes how to deploy virtual machine extensions to Azure Arc enabled servers running in hybrid cloud environments using Azure PowerShell.
-ms.date: 11/24/2020
+ms.date: 01/05/2021
ms.topic: conceptual ---
@@ -30,9 +30,11 @@ The following example enables the Log Analytics VM extension on a Arc enabled Li
```powershell PS C:\> $Setting = @{ "workspaceId" = "workspaceId" } PS C:\> $protectedSetting = @{ "workspaceKey" = "workspaceKey" }
-PS C:\> New-AzConnectedMachineExtension -Name OMSLinuxAgent -ResourceGroupName "myResourceGroup" -MachineName "myMachine" -Location "eastus" -Publisher "Microsoft.EnterpriseCloud.Monitoring" -TypeHandlerVersion "1.10" -Settings $Setting -ProtectedSetting $protectedSetting -ExtensionType "OmsAgentforLinux"
+PS C:\> New-AzConnectedMachineExtension -Name OMSLinuxAgent -ResourceGroupName "myResourceGroup" -MachineName "myMachine" -Location "eastus" -Publisher "Microsoft.EnterpriseCloud.Monitoring" -TypeHandlerVersion "1.10" -Settings $Setting -ProtectedSetting $protectedSetting -ExtensionType "OmsAgentForLinux"
```
+To enable the Log Analytics VM extension on an Arc enabled Windows server, change the value for the `-ExtensionType` parameter to `"MicrosoftMonitoringAgent"` in the previous example.
+ The following example enables the Custom Script Extension on an Arc enabled server: ```powershell
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-app-settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
@@ -181,6 +181,14 @@ Specifies the maximum number of language worker processes, with a default value
|---|------------| |FUNCTIONS\_WORKER\_PROCESS\_COUNT|2|
+## PYTHON\_THREADPOOL\_THREAD\_COUNT
+
+Specifies the maximum number of threads that a Python language worker uses to execute function invocations. The default value is `1` for Python version `3.8` and below; for Python version `3.9` and above, the value is set to `None`. This setting doesn't guarantee that this many threads run during execution; it only allows the worker to expand its thread pool up to the specified value. The setting applies only to Python function apps, and only to synchronous function invocations, not to coroutines.
+
+|Key|Sample value|Max value|
+|---|------------|---------|
+|PYTHON\_THREADPOOL\_THREAD\_COUNT|2|32|
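The effect of the setting can be sketched with a plain thread pool: the environment value caps the pool size used for synchronous invocations. This is an illustrative sketch, not the Python worker's actual code; the function name is hypothetical.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_invocation_pool():
    # The worker reads the app setting as an environment variable and
    # sizes its thread pool accordingly (default 1, per the table above).
    count = int(os.environ.get("PYTHON_THREADPOOL_THREAD_COUNT", "1"))
    return ThreadPoolExecutor(max_workers=count)

os.environ["PYTHON_THREADPOOL_THREAD_COUNT"] = "2"
pool = make_invocation_pool()
results = list(pool.map(lambda n: n * n, [1, 2, 3, 4]))
print(results)  # → [1, 4, 9, 16]
pool.shutdown()
```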
+ ## FUNCTIONS\_WORKER\_RUNTIME
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-kotlin-maven https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-first-kotlin-maven.md
@@ -27,9 +27,9 @@ To develop functions using Kotlin, you must have the following installed:
> [!IMPORTANT] > The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
-## Generate a new Functions project
+## Generate a new Azure Functions project
-In an empty folder, run the following command to generate the Functions project from a [Maven archetype](https://maven.apache.org/guides/introduction/introduction-to-archetypes.html).
+In an empty folder, run the following command to generate the Azure Functions project from a [Maven archetype](https://maven.apache.org/guides/introduction/introduction-to-archetypes.html).
# [bash](#tab/bash) ```bash
@@ -159,16 +159,16 @@ The deploy process to Azure Functions uses account credentials from the Azure CL
az login ```
-Deploy your code into a new Function app using the `azure-functions:deploy` Maven target.
+Deploy your code into a new function app using the `azure-functions:deploy` Maven target.
> [!NOTE]
-> When you use Visual Studio Code to deploy your Function app, remember to choose a non-free subscription, or you will get an error. You can watch your subscription on the left side of the IDE.
+> When you use Visual Studio Code to deploy your function app, remember to choose a non-free subscription, or you will get an error. You can watch your subscription on the left side of the IDE.
``` mvn azure-functions:deploy ```
-When the deploy is complete, you see the URL you can use to access your Azure function app:
+When the deploy is complete, you see the URL you can use to access your function app:
<pre> [INFO] Successfully deployed Function App with package.
@@ -193,7 +193,7 @@ Hello AzureFunctions!
## Make changes and redeploy
-Edit the `src/main.../Function.java` source file in the generated project to alter the text returned by your Function app. Change this line:
+Edit the `src/main.../Function.java` source file in the generated project to alter the text returned by your function app. Change this line:
```kotlin return request
@@ -226,7 +226,7 @@ Hi, AzureFunctionsTest
## Reference bindings
-To work with [Functions triggers and bindings](functions-triggers-bindings.md) other than HTTP trigger and Timer trigger, you need to install binding extensions. While not required by this article, you'll need to know how to do enable extensions when working with other binding types.
+To work with [Azure Functions triggers and bindings](functions-triggers-bindings.md) other than HTTP trigger and Timer trigger, you need to install binding extensions. While not required by this article, you'll need to know how to enable extensions when working with other binding types.
[!INCLUDE [functions-extension-bundles](../../includes/functions-extension-bundles.md)]
@@ -234,7 +234,7 @@ To work with [Functions triggers and bindings](functions-triggers-bindings.md) o
You've created a Kotlin function app with a simple HTTP trigger and deployed it to Azure Functions. -- Review the [Java Functions developer guide](functions-reference-java.md) for more information on developing Java and Kotlin functions.
+- Review the [Java function developer guide](functions-reference-java.md) for more information on developing Java and Kotlin functions.
- Add additional functions with different triggers to your project using the `azure-functions:add` Maven target. - Write and debug functions locally with [Visual Studio Code](https://code.visualstudio.com/docs/java/java-azurefunctions), [IntelliJ](functions-create-maven-intellij.md), and [Eclipse](functions-create-maven-eclipse.md). -- Debug functions deployed in Azure with Visual Studio Code. See the Visual Studio Code [serverless Java applications](https://code.visualstudio.com/docs/java/java-serverless#_remote-debug-functions-running-in-the-cloud) documentation for instructions.\ No newline at end of file
+- Debug functions deployed in Azure with Visual Studio Code. See the Visual Studio Code [serverless Java applications](https://code.visualstudio.com/docs/java/java-serverless#_remote-debug-functions-running-in-the-cloud) documentation for instructions.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference-python.md
@@ -294,87 +294,7 @@ Likewise, you can set the `status_code` and `headers` for the response message i
## Scaling and Performance
-It's important to understand how your functions perform and how that performance affects the way your function app gets scaled. This is particularly important when designing highly performant apps. The following are several factors to consider when designing, writing and configuring your functions apps.
-
-### Horizontal scaling
-By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Python as needed. Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [How the Consumption and Premium plans work](functions-scale.md#how-the-consumption-and-premium-plans-work).
-
-### Improving throughput performance
-
-A key to improving performance is understanding how your app uses resources and being able to configure your function app accordingly.
-
-#### Understanding your workload
-
-The default configurations are suitable for most of Azure Functions applications. However, you can improve the performance of your applications' throughput by employing configurations based on your workload profile. The first step is to understand the type of workload that you are running.
-
-| | I/O-bound workload | CPU-bound workload |
-|--| -- | -- |
-|**Function app characteristics**| <ul><li>App needs to handle many concurrent invocations.</li> <li> App processes a large number of I/O events, such as network calls and disk read/writes.</li> </ul>| <ul><li>App does long-running computations, such as image resizing.</li> <li>App does data transformation.</li> </ul> |
-|**Examples**| <ul><li>Web APIs</li><ul> | <ul><li>Data processing</li><li> Machine learning inference</li><ul>|
--
-> [!NOTE]
-> As real world functions workload are most of often a mix of I/O and CPU bound, we recommend to profile the workload under realistic production loads.
--
-#### Performance-specific configurations
-
-After understanding the workload profile of your function app, the following are configurations that you can use to improve the throughput performance of your functions.
-
-##### Async
-
-Because [Python is a single-threaded runtime](https://wiki.python.org/moin/GlobalInterpreterLock), a host instance for Python can process only one function invocation at a time. For applications that process a large number of I/O events and/or is I/O bound, you can improve performance significantly by running functions asynchronously.
-
-To run a function asynchronously, use the `async def` statement, which runs the function with [asyncio](https://docs.python.org/3/library/asyncio.html) directly:
-
-```python
-async def main():
- await some_nonblocking_socket_io_op()
-```
-Here is an example of a function with HTTP trigger that uses [aiohttp](https://pypi.org/project/aiohttp/) http client:
-
-```python
-import aiohttp
-
-import azure.functions as func
-
-async def main(req: func.HttpRequest) -> func.HttpResponse:
- async with aiohttp.ClientSession() as client:
- async with client.get("PUT_YOUR_URL_HERE") as response:
- return func.HttpResponse(await response.text())
-
- return func.HttpResponse(body='NotFound', status_code=404)
-```
--
-A function without the `async` keyword is run automatically in an asyncio thread-pool:
-
-```python
-# Runs in an asyncio thread-pool
-
-def main():
- some_blocking_socket_io()
-```
-
-In order to achieve the full benefit of running functions asynchronously, the I/O operation/library that is used in your code needs to have async implemented as well. Using synchronous I/O operations in functions that are defined as asynchronous **may hurt** the overall performance.
-
-Here are a few examples of client libraries that has implemented async pattern:
-- [aiohttp](https://pypi.org/project/aiohttp/) - Http client/server for asyncio -- [Streams API](https://docs.python.org/3/library/asyncio-stream.html) - High-level async/await-ready primitives to work with network connection-- [Janus Queue](https://pypi.org/project/janus/) - Thread-safe asyncio-aware queue for Python-- [pyzmq](https://pypi.org/project/pyzmq/) - Python bindings for ZeroMQ
-
-
-##### Use multiple language worker processes
-
-By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
-
-For CPU bound apps, you should set the number of language worker to be the same as or higher than the number of cores that are available per function app. To learn more, see [Available instance SKUs](functions-premium-plan.md#available-instance-skus).
-
-I/O-bound apps may also benefit from increasing the number of worker processes beyond the number of cores available. Keep in mind that setting the number of workers too high can impact overall performance due to the increased number of required context switches.
-
-The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Functions creates when scaling out your application to meet demand.
-
+For scaling and performance best practices for Python function apps, see the [Python scale and performance article](python-scale-performance-reference.md).
## Context
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/python-scale-performance-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/python-scale-performance-reference.md new file mode 100644
@@ -0,0 +1,175 @@
+---
+title: Improve throughput performance of Python apps in Azure Functions
+description: Learn how to develop Azure Functions apps using Python that are highly performant and scale well under load.
+ms.topic: article
+ms.date: 10/13/2020
+ms.custom: devx-track-python
+---
+# Improve throughput performance of Python apps in Azure Functions
+
+When developing for Azure Functions using Python, you need to understand how your functions perform and how that performance affects the way your function app gets scaled. This is especially important when designing highly performant apps. The main factors to consider when designing, writing, and configuring your function apps are horizontal scaling and throughput performance configurations.
+
+## Horizontal scaling
+By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Python as needed. Azure Functions uses built-in thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. These thresholds aren't user configurable. For more information, see [How the Consumption and Premium plans work](functions-scale.md#how-the-consumption-and-premium-plans-work).
+
+## Improving throughput performance
+
+The default configurations are suitable for most Azure Functions applications. However, you can improve your applications' throughput by employing configurations based on your workload profile. The first step is to understand the type of workload that you are running.
+
+|| I/O-bound workload | CPU-bound workload |
+|--| -- | -- |
+|Function app characteristics| <ul><li>App needs to handle many concurrent invocations.</li> <li> App processes a large number of I/O events, such as network calls and disk read/writes.</li> </ul>| <ul><li>App does long-running computations, such as image resizing.</li> <li>App does data transformation.</li> </ul> |
+|Examples| <ul><li>Web APIs</li></ul> | <ul><li>Data processing</li><li>Machine learning inference</li></ul>|
+
+
+Because real-world function workloads are usually a mix of I/O-bound and CPU-bound operations, you should profile the app under realistic production loads.
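As a rough first pass before full load testing, comparing wall-clock time with CPU time for a single invocation can hint at which side a function leans (a self-contained sketch; the 50% threshold is an arbitrary assumption for illustration):

```python
import time

def classify_workload(fn) -> str:
    # Compare wall-clock time with CPU time for one call; a large gap
    # means the function spent most of its time waiting (I/O-bound).
    wall_start, cpu_start = time.perf_counter(), time.process_time()
    fn()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return "I/O-bound" if cpu < wall * 0.5 else "CPU-bound"

# A sleep stands in for a network wait; the sum stands in for computation.
print(classify_workload(lambda: time.sleep(0.2)))
print(classify_workload(lambda: sum(i * i for i in range(10**6))))
```

This only samples one invocation on one machine, so treat the result as a hint that guides where to spend profiling effort, not as a verdict.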
++
+### Performance-specific configurations
+
+After understanding the workload profile of your function app, you can use the following configurations to improve the throughput performance of your functions.
+
+* [Async](#async)
+* [Multiple language worker processes](#use-multiple-language-worker-processes)
+* [Max workers within a language worker process](#set-up-max-workers-within-a-language-worker-process)
+* [Event loop](#managing-event-loop)
+* [Vertical scaling](#vertical-scaling)
+++
+#### Async
+
+Because [Python is a single-threaded runtime](https://wiki.python.org/moin/GlobalInterpreterLock), a host instance for Python can process only one function invocation at a time by default. For applications that process a large number of I/O events and/or are I/O bound, you can improve performance significantly by running functions asynchronously.
+
+To run a function asynchronously, use the `async def` statement, which runs the function with [asyncio](https://docs.python.org/3/library/asyncio.html) directly:
+
+```python
+async def main():
+ await some_nonblocking_socket_io_op()
+```
+Here is an example of an HTTP-triggered function that uses the [aiohttp](https://pypi.org/project/aiohttp/) HTTP client:
+
+```python
+import aiohttp
+
+import azure.functions as func
+
+async def main(req: func.HttpRequest) -> func.HttpResponse:
+ async with aiohttp.ClientSession() as client:
+ async with client.get("PUT_YOUR_URL_HERE") as response:
+ return func.HttpResponse(await response.text())
+
+ return func.HttpResponse(body='NotFound', status_code=404)
+```
++
+A function without the `async` keyword is run automatically in a ThreadPoolExecutor thread pool:
+
+```python
+# Runs in a ThreadPoolExecutor thread pool. The number of threads is defined by PYTHON_THREADPOOL_THREAD_COUNT.
+# This example shows how synchronous functions are handled by default.
+
+def main():
+ some_blocking_socket_io()
+```
+
+In order to achieve the full benefit of running functions asynchronously, the I/O operations and libraries used in your code need to be async as well. Using synchronous I/O operations in functions that are defined as asynchronous **may hurt** the overall performance. If the libraries you are using do not have an async version, you may still benefit from running your code asynchronously by [managing the event loop](#managing-event-loop) in your app.
+
+Here are a few examples of client libraries that have implemented the async pattern:
+- [aiohttp](https://pypi.org/project/aiohttp/) - Http client/server for asyncio
+- [Streams API](https://docs.python.org/3/library/asyncio-stream.html) - High-level async/await-ready primitives to work with network connection
+- [Janus Queue](https://pypi.org/project/janus/) - Thread-safe asyncio-aware queue for Python
+- [pyzmq](https://pypi.org/project/pyzmq/) - Python bindings for ZeroMQ
+
+##### Understanding async in Python worker
+
+When you define `async` in front of a function signature, Python marks the function as a coroutine. When you call the coroutine, it can be scheduled as a task on an event loop. When you `await` in an async function, a continuation is registered with the event loop, which allows the event loop to process the next task during the wait time.
+
+The Python worker shares the event loop with the customer's `async` functions and can handle multiple requests concurrently. We strongly encourage customers to use asyncio-compatible libraries, such as [aiohttp](https://pypi.org/project/aiohttp/) and [pyzmq](https://pypi.org/project/pyzmq/). Following these recommendations greatly increases your function's throughput compared to libraries implemented in a synchronous fashion.
+
+> [!NOTE]
+> If your function is declared as `async` without any `await` inside its implementation, the performance of your function will be severely impacted, because the event loop is blocked, which prevents the Python worker from handling concurrent requests.
+
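The cost of blocking the event loop can be seen in a small local experiment (a self-contained sketch using only the standard library; the 0.2-second delay and task count are arbitrary):

```python
import asyncio
import time

async def yielding(delay: float) -> None:
    # await hands control back to the event loop during the wait
    await asyncio.sleep(delay)

async def blocking(delay: float) -> None:
    # time.sleep never yields; the event loop is stalled for the full
    # duration, so the "concurrent" coroutines actually run one by one
    time.sleep(delay)

async def run_batch(coro_fn, delay: float, count: int) -> float:
    # Schedule `count` coroutines concurrently and time the batch
    start = time.monotonic()
    await asyncio.gather(*(coro_fn(delay) for _ in range(count)))
    return time.monotonic() - start

overlapped = asyncio.run(run_batch(yielding, 0.2, 5))   # waits overlap
serialized = asyncio.run(run_batch(blocking, 0.2, 5))   # loop blocked
print(f"with await: {overlapped:.2f}s, blocking: {serialized:.2f}s")
```

With `await asyncio.sleep`, the five waits overlap and finish in roughly one delay; with `time.sleep` inside `async def`, they run back to back.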
+#### Use multiple language worker processes
+
+By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers.
+
+For CPU-bound apps, you should set the number of language workers to be the same as or higher than the number of cores that are available per function app. To learn more, see [Available instance SKUs](functions-premium-plan.md#available-instance-skus).
+
+I/O-bound apps may also benefit from increasing the number of worker processes beyond the number of cores available. Keep in mind that setting the number of workers too high can impact overall performance due to the increased number of required context switches.
+
+The FUNCTIONS_WORKER_PROCESS_COUNT setting applies to each host that Functions creates when scaling out your application to meet demand.
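`FUNCTIONS_WORKER_PROCESS_COUNT` itself is a host setting, but the guidance above can be summarized as a small hypothetical helper (the 2x multiplier for I/O-bound apps is an assumption for illustration, not official guidance):

```python
import os

def suggested_worker_count(cpu_bound: bool, cores: int) -> int:
    # Hypothetical helper, not part of Azure Functions: for CPU-bound
    # apps, at least one worker per available core; capped at the
    # setting's maximum of 10.
    if cpu_bound:
        return min(max(cores, 1), 10)
    # I/O-bound apps may benefit from going beyond the core count;
    # the 2x factor here is an assumption to experiment from.
    return min(max(2 * cores, 1), 10)

print(suggested_worker_count(cpu_bound=True, cores=os.cpu_count() or 1))
```

Whatever value you derive, validate it under realistic load before setting it on the app.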
+
+#### Set up max workers within a language worker process
+
+As mentioned in the async [section](#understanding-async-in-python-worker), the Python language worker treats functions and [coroutines](https://docs.python.org/3/library/asyncio-task.html#coroutines) differently. A coroutine is run within the same event loop that the language worker runs on. A sync function invocation, on the other hand, is run as a thread within a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor) that is maintained by the language worker.
+
+You can set the maximum number of workers allowed for running sync functions using the [PYTHON_THREADPOOL_THREAD_COUNT](functions-app-settings.md#python_threadpool_thread_count) application setting. This value sets the `max_workers` argument of the ThreadPoolExecutor object, which lets Python use a pool of at most `max_workers` threads to execute calls asynchronously. `PYTHON_THREADPOOL_THREAD_COUNT` applies to each worker that the Functions host creates, and Python decides when to create a new thread or reuse an existing idle thread. For older Python versions (that is, `3.8`, `3.7`, and `3.6`), the `max_workers` value is set to 1. For Python version `3.9`, `max_workers` is set to `None`.
+
+For CPU-bound apps, you should keep the setting to a low number, starting from 1 and increasing as you experiment with your workload. This reduces the time spent on context switches and allows CPU-bound tasks to finish.
+
+For I/O-bound apps, you should see substantial gains by increasing the number of threads working on each invocation. The recommendation is to start with the Python default (the number of cores + 4) and then tweak based on the throughput values you observe.
+
+For apps with mixed workloads, you should balance both the `FUNCTIONS_WORKER_PROCESS_COUNT` and `PYTHON_THREADPOOL_THREAD_COUNT` configurations to maximize throughput. To understand what your function apps spend the most time on, we recommend profiling them and setting the values according to the behavior they exhibit. Also refer to this [section](#use-multiple-language-worker-processes) to learn about the FUNCTIONS_WORKER_PROCESS_COUNT application setting.
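The effect of the thread count on an I/O-bound workload can be approximated locally with `ThreadPoolExecutor`, the same pool type the worker uses for sync invocations (a sketch; the delays are arbitrary, and `PYTHON_THREADPOOL_THREAD_COUNT` itself only exists inside the Functions host):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound(delay: float) -> float:
    # Stand-in for a blocking I/O call such as a network request
    time.sleep(delay)
    return delay

def timed_run(max_workers: int, tasks: int = 8, delay: float = 0.2) -> float:
    # Run `tasks` blocking calls through a pool of `max_workers` threads
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(io_bound, [delay] * tasks))
    return time.monotonic() - start

one = timed_run(max_workers=1)    # tasks run one at a time
eight = timed_run(max_workers=8)  # waits overlap across threads
print(f"1 thread: {one:.2f}s, 8 threads: {eight:.2f}s")
```

The single-thread run takes roughly tasks x delay, while eight threads let the waits overlap; for CPU-bound work the extra threads would instead add context-switch overhead.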
+
+> [!NOTE]
+> Although these recommendations apply to both HTTP and non-HTTP triggered functions, you might need to adjust other trigger-specific configurations for non-HTTP triggered functions to get the expected performance from your function apps. For more information, see [this article](functions-best-practices.md).
++
+#### Managing event loop
+
+You should use asyncio-compatible third-party libraries. If none of the third-party libraries meet your needs, you can also manage the event loops in Azure Functions. Managing event loops gives you more flexibility in compute resource management, and it also makes it possible to wrap synchronous I/O libraries into coroutines.
+
+The official Python documentation discusses [Coroutines and Tasks](https://docs.python.org/3/library/asyncio-task.html) and the [Event Loop](https://docs.python.org/3.8/library/asyncio-eventloop.html) in the built-in **asyncio** library.
+
+Take the [requests](https://github.com/psf/requests) library as an example: the following code snippet uses the **asyncio** library to wrap the `requests.get()` method into a coroutine, running multiple web requests to SAMPLE_URL concurrently.
++
+```python
+import asyncio
+import json
+import logging
+
+import azure.functions as func
+from time import time
+from requests import get, Response
++
+async def invoke_get_request(eventloop: asyncio.AbstractEventLoop) -> Response:
+ # Wrap requests.get function into a coroutine
+ single_result = await eventloop.run_in_executor(
+ None, # using the default executor
+ get, # the blocking requests.get function to run
+ 'SAMPLE_URL' # the url to be passed into the requests.get function
+ )
+ return single_result
+
+async def main(req: func.HttpRequest) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+
+ eventloop = asyncio.get_event_loop()
+
+ # Create 10 tasks for requests.get synchronous call
+ tasks = [
+ asyncio.create_task(
+ invoke_get_request(eventloop)
+ ) for _ in range(10)
+ ]
+
+ done_tasks, _ = await asyncio.wait(tasks)
+ status_codes = [d.result().status_code for d in done_tasks]
+
+ return func.HttpResponse(body=json.dumps(status_codes),
+ mimetype='application/json')
+```
+#### Vertical scaling
+To get more processing units, especially for CPU-bound operations, consider upgrading to a premium plan with higher specifications. With more processing units, you can adjust the worker process count according to the number of cores available and achieve a higher degree of parallelism.
+
+## Next steps
+
+For more information about Azure Functions Python development, see the following resources:
+
+* [Azure Functions Python developer guide](functions-reference-python.md)
+* [Best practices for Azure Functions](functions-best-practices.md)
+* [Azure Functions developer reference](functions-reference.md)
+
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/set-runtime-version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/set-runtime-version.md
@@ -10,9 +10,13 @@ ms.date: 07/22/2020
A function app runs on a specific version of the Azure Functions runtime. There are three major versions: [1.x, 2.x, and 3.x](functions-versions.md). By default, function apps are created in version 3.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
+The way that you manually target a specific version depends on whether you are running Windows or Linux.
+ ## Automatic and manual version updates
-Azure Functions lets you target a specific version of the runtime by using the `FUNCTIONS_EXTENSION_VERSION` application setting in a function app. The function app is kept on the specified major version until you explicitly choose to move to a new version. If you specify only the major version, the function app is automatically updated to new minor versions of the runtime when they become available. New minor versions shouldn't introduce breaking changes.
+_This section doesn't apply when running your function app [on Linux](#manual-version-updates-on-linux)._
+
+Azure Functions lets you target a specific version of the runtime on Windows by using the `FUNCTIONS_EXTENSION_VERSION` application setting in a function app. The function app is kept on the specified major version until you explicitly choose to move to a new version. If you specify only the major version, the function app is automatically updated to new minor versions of the runtime when they become available. New minor versions shouldn't introduce breaking changes.
If you specify a minor version (for example, "2.0.12345"), the function app is pinned to that specific version until you explicitly change it. Older minor versions are regularly removed from the production environment. After this occurs, your function app runs on the latest version instead of the version set in `FUNCTIONS_EXTENSION_VERSION`. Because of this, you should quickly resolve any issues with your function app that require a specific minor version, so that you can instead target the major version. Minor version removals are announced in [App Service announcements](https://github.com/Azure/app-service-announcements/issues).
@@ -33,6 +37,8 @@ A change to the runtime version causes a function app to restart.
## View and update the current runtime version
+_This section doesn't apply when running your function app [on Linux](#manual-version-updates-on-linux)._
+ You can change the runtime version used by your function app. Because of the potential of breaking changes, you can only change the runtime version before you have created any functions in your function app. > [!IMPORTANT]
@@ -117,6 +123,67 @@ As before, replace `<FUNCTION_APP>` with the name of your function app and `<RES
The function app restarts after the change is made to the application setting.
+## Manual version updates on Linux
+
+To pin a Linux function app to a specific host version, specify the image URL in the `LinuxFxVersion` field of the site config. For example, to pin a Node 10 function app to host version 3.0.13142:
+
+For **Linux App Service/Elastic Premium apps**:
+Set `LinuxFxVersion` to `DOCKER|mcr.microsoft.com/azure-functions/node:3.0.13142-node10-appservice`.
+
+For **Linux Consumption apps**:
+Set `LinuxFxVersion` to `DOCKER|mcr.microsoft.com/azure-functions/mesh:3.0.13142-node10`.
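The two image layouts above can be composed with a small hypothetical helper (only the formats shown in this section are assumed; other runtimes may use different tags):

```python
def linux_fx_version(host_version: str, runtime: str, runtime_tag: str,
                     consumption: bool) -> str:
    # Compose the documented LinuxFxVersion values: Consumption apps use
    # the "mesh" repository with no suffix; App Service/Elastic Premium
    # apps use the runtime repository with an "-appservice" suffix.
    registry = "mcr.microsoft.com/azure-functions"
    if consumption:
        return f"DOCKER|{registry}/mesh:{host_version}-{runtime_tag}"
    return f"DOCKER|{registry}/{runtime}:{host_version}-{runtime_tag}-appservice"

print(linux_fx_version("3.0.13142", "node", "node10", consumption=False))
```

For example, `linux_fx_version("3.0.13142", "node", "node10", consumption=True)` yields the Consumption value shown above.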
++
+# [Azure CLI](#tab/azurecli-linux)
+
+You can view and set the `LinuxFxVersion` from the Azure CLI.
+
+Using the Azure CLI, view the current runtime version with the [az functionapp config show](/cli/azure/functionapp/config) command.
+
+```azurecli-interactive
+az functionapp config show --name <function_app> \
+--resource-group <my_resource_group>
+```
+
+In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app.
+
+You see the `linuxFxVersion` in the following output, which has been truncated for clarity:
+
+```output
+{
+ ...
+
+ "kind": null,
+ "limits": null,
+ "linuxFxVersion": <LINUX_FX_VERSION>,
+ "loadBalancing": "LeastRequests",
+ "localMySqlEnabled": false,
+ "location": "West US",
+ "logsDirectorySizeLimit": 35,
+ ...
+}
+```
+
+You can update the `linuxFxVersion` setting in the function app with the [az functionapp config set](/cli/azure/functionapp/config) command.
+
+```azurecli-interactive
+az functionapp config set --name <FUNCTION_APP> \
+--resource-group <RESOURCE_GROUP> \
+--linux-fx-version <LINUX_FX_VERSION>
+```
+
+Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the values explained above.
+
+You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
++
+Similarly, the function app restarts after the change is made to the site config.
+
+> [!NOTE]
+> Setting `LinuxFxVersion` to an image URL directly for Consumption apps opts them out of placeholders and other cold start optimizations.
+
+---
+ ## Next steps > [!div class="nextstepaction"]
azure-government https://docs.microsoft.com/en-us/azure/azure-government/compliance/documentation-accelerate-compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/documentation-accelerate-compliance.md
@@ -14,7 +14,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: azure-government
-ms.date: 10/27/2020
+ms.date: 01/05/2021
ms.author: todorb ---
@@ -70,7 +70,6 @@ For a list of existing Azure Marketplace offerings in this space, visit [this pa
> [!NOTE] >The information provided here will allow partners and customers to sign up and learn about the compliance program. The program is designed to help Azure and Azure Government customers successfully prepare their environments for authorization and request a FedRAMP ATO. This information does not constitute an offer of any kind, and submitting the forms below in no way guarantees participation in the program. At this time, the program details shared with partners and customers are notional and subject to change without notice.
- * Are you a customer looking for compliance help on Azure and don't know where to start? Fill out our [form](https://aka.ms/azcl).
* Free [training on FedRAMP](https://www.fedramp.gov/learning/). * FedRAMP [templates](https://www.fedramp.gov/templates/) to help you with program requirements. * Get familiar with the [FedRAMP Marketplace](https://marketplace.fedramp.gov/#/products).
@@ -79,4 +78,4 @@ For a list of existing Azure Marketplace offerings in this space, visit [this pa
* To learn how Azure Blueprints help you when using Azure Policy review the [blog post](https://azure.microsoft.com/blog/new-azure-blueprint-simplifies-compliance-with-nist-sp-800-53/). ## Next steps
-Review the documentation above. If you are still facing issues reach out to [Azure Compliance Acceleration Program](mailto:azcl@microsoft.com).
\ No newline at end of file
+Review the documentation above. If you are still facing issues reach out to [Azure Government Partner Inquiries](mailto:azgovpartinf@microsoft.com).
azure-government https://docs.microsoft.com/en-us/azure/azure-government/documentation-government-csp-list https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-csp-list.md
@@ -5,19 +5,20 @@ services: azure-government
cloud: gov ms.service: azure-government ms.topic: article
-ms.date: 7/13/2020
+ms.date: 01/05/2021
--- # Azure Government authorized reseller list Since the launch of the [Azure Government in the Cloud Solution Provider Program (CSP)](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), work has been done with the Partner Community to bring them the benefits of this channel, enable them to resell Azure Government, and help them grow their business while providing the cloud services their customers need.
-Below you can find a list of all the authorized Cloud Solution Providers, which can resell Azure Government. This list includes all approved CSPs as of **July 10, 2020** as well as the list of Licensing Solution Providers (LSP). Updates to this list will be made as new partners are onboarded.
+Below you can find a list of all the authorized Cloud Solution Providers, AOS-G (Agreement for Online Services for Government) partners, and Licensing Solution Providers (LSP) that can transact Azure Government. This list includes all approved partners as of **January 5, 2021**. Updates to this list will be made as new partners are onboarded.
## Approved direct CSPs |Partner Name| |----------------------------| |[10th Magnitude](https://www.10thmagnitude.com)|
+|[12:34 MicroTechnologies Inc.](https://www.1234micro.com/)|
|[1901 Group, LLC](https://1901group.com)| |[3Cloud Solutions](https://www.3cloudsolutions.com/)| |[3Di inc](https://www.3disystems.com)|
@@ -29,8 +30,10 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[ActioNet](https://www.actionet.com/)| |[ADNET Technologies](https://thinkadnet.com/)| |[Adoxio Business Solutions Limited](https://www.adoxio.com)|
+|[Advisicon, Inc](https://advisicon.com/)|
|[Aeon Nexus Corp.](https://www.aeonnexus.com/)| |[Affigent](http://www.affigent.com/)|
+|[Agile Defense Inc](https://agile-defense.com/)|
|[Agile IT](https://www.agileit.com/)| |[Airnet Group](https://www.airnetgroup.com/)| |[AIS Network](https://www.aisn.net/)|
@@ -54,6 +57,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Ascent Innovations LLC](https://www.ascent365.com/)| |[ASM Research LLC](https://www.asmr.com)| |ATLGaming|
+|[Arraya Solutions](https://www.arrayasolutions.com)|
|[Atmosera, Inc.](https://www.atmosera.com)| |[Atos IT Solutions and Services](https://atos.net)| |[Avolve Software Corp.](https://www.avolvesoftware.com)|
@@ -84,6 +88,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[CGI Technologies and Solutions Inc.](https://www.cgi.com)| |[Ciellos Inc.](https://www.ciellos.com/)| |[Ciracom Inc.](https://ciracom.com)|
+|[Clients First Business Solutions LLC](https://www.clientsfirst-us.com)|
|[ClearShark](https://clearshark.com/)| |[CloudFit Software, LLC](https://www.cloudfitsoftware.com/)| |[Cloud Navigator, Inc - formerly ISC](https://www.cloudnav.com )|
@@ -98,12 +103,15 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Coretek Services](https://www.coretekservices.com/)| |[Cornerstone Technologies](https://www.cornerstonetechnologies.com/)| |[Corporate Technologies LLC](https://www.gocorptech.com/)|
+|[Crayon Software Experts LLC](https://www.crayon.com/)|
|[Cre8tive Technology Design](https://www.ctnd.com/)| |[Crowe Horwath LLP](https://www.crowe.com/)| |[CSI, L.L.C.](http://www.csinov.com/index.php)| |[CuroGens, Inc.](https://www.curogens.com/)| |[CSRA, LLC](https://www.csra.com)| |[CWPS](https://www.cwps.com/)|
+|[Cyber Advisors](https://cyberadvisors.com)|
+|[Cyber Cloud Technologies](https://www.cyber-cloud.com)|
|[Cyber Korp Inc.](https://cyberkorp.com/)| |[Cybercore Solutions LLC](https://cybercoresolutions.com/)| |[Dalecheck Technology Group](https://www.dalechek.com/)|
@@ -127,15 +135,18 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[DXC Technology Services LLC](https://www.dxc.technology/services)| |[DXL Enterprises, Inc.](https://mahwahnjcoc.wliinc31.com/Supply-Chain-Management/DXL-Enterprises,-Inc-1349)| |[Dynamics Intelligence Inc.](https://www.dynamicsintelligence.us)|
+|[DynTek](https://www.dyntek.com)|
|eFibernet Inc.| |[eMazzanti Technologies](https://www.emazzanti.net/)| |[Enabling Technologies Corp.](https://www.enablingtechcorp.com/)| |[Ensono](https://www.ensono.com)| |[Enterprise Infrastructure Partners, LLC](http://www.entisp.com/)| |[Enterprise Technology International](https://enterpriseti.com)|
+|[Envistacom](https://www.envistacom.com)|
|[Epic Systems Inc](http://epicinfotech.com/)| |[EpochConcepts](https://epochconcepts.com)| |[Equilibrium IT Solutions, Inc.](https://eqinc.com/)|
+|[Evertec](http://www.evertecinc.com)|
|[eWay Corp](https://www.ewaycorp.com)| |[Exbabylon IT Solutions](https://www.exbabylon.com)| |[FI Consulting](https://www.ficonsulting.com/)|
@@ -165,6 +176,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Hendrix Corporation](https://www.hendrixcorp.com/)| |[Hewlett Packard Enterprise](https://www.hpe.com)| |[Hiscomp](http://www.hiscompllc.com/)|
+|[Hitachi Vantara](https://www.hitachivantarafederal.com/rean-cloud/)|
|[HTS Voice & Data Systems, Inc.](https://www.hts-tx.com/)| |[HumanTouch LLC](https://www.humantouchllc.com/)| |[I10 Inc](http://i10agile.com/)|
@@ -193,6 +205,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[IV4, Inc](https://www.iv4.com)| |[Jackpine Technologies](https://www.jackpinetech.com)| |[Jacobs Technology Inc.](https://www.jacobs.com/)|
+|[Jadex Strategic Group](https://jadexstrategic.com)|
|[Jasper Solutions Inc.](https://jaspersolutions.com/)| |[JHC Technology, Inc.](https://www.jhctechnology.com/)| |[Quiet Professionals](https://quietprofessionalsllc.com)|
@@ -207,8 +220,10 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Ladlas Prince LLC](https://www.ladlasprince.com)| |[Lear360.com](https://www.lear360.com)| |[Leidos](https://www.leidos.com)|
+|[Leslie Digital Imaging LLC.](https://www.myldi.com)|
|[Liftoff, LLC](http://liftoffonline.com/)| |[Lightstream Managed Services, LLC](https://www.lightstream.tech)|
+|[Liquid Mercury Solutions](https://www.liquid-hg.com/)|
|[Logicalis, Inc.](https://www.us.logicalis.com/)| |[Lucidius Group LLC](http://www.lucidiusgrp.com)| |[M2 Technology, Inc.](http://www.m2ti.com/)|
@@ -218,6 +233,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[ManCom Inc](https://www.mancominc.com/)| |[ManTech](https://www.mantech.com/Pages/Home.aspx)| |[Marco Technologies LLC](https://www.marconet.com/)|
+|[Menlo Technologies](https://www.menlo-technologies.com)|
|[MetroStar Systems Inc.](https://www.metrostarsystems.com)| |Mibura Inc.| |[Microtechnologies, LLC](https://www.microtech.net/)|
@@ -235,6 +251,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[NewWave Telecom & Technologies, Inc](https://www.newwave.io)| |[NexustTek](https://www.nexustek.com/)| |[Nihilent Inc](https://nihilent.com)|
+|[Nimbus Logic LLC](https://www.nimbus-logic.com)|
|[Norseman, Inc](https://www.norseman.com)| |[Northern Sky Technologies, Inc]| |[Northrop Grumman](https://www.northropgrumman.com)|
@@ -243,6 +260,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Nubelity LLC](http://www.nubelity.com)| |[NuSoft Solutions (Atrio Systems, Inc.)](https://nusoftsolutions.com)| |[NWN Corporation](https://www.nwnit.com)|
+|[OCH Technologies LLC](https://www.ochtechnologies.com)|
|[Olive + Goose](https://www.oliveandgoose.com/)| |[Om Group, Inc.](http://www.omgroupinc.us/)| |[OneNeck IT Solutions](https://www.oneneck.com)|
@@ -252,11 +270,15 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[OpsPro](https://opspro.com/)| |[Orion Communications, Inc.](https://www.orioncom.com)| |[Outlook Insight, LLC](http://outlookinsight.com/)|
+|[PA-Group](https://pa-group.us/)|
+|[Palecek Consulting Group](https://www.pcgit.net)|
|[Pangea Group Inc.](http://www.pangea-group.com)|
+|[Parachute Technology](https://www.parachutech.com)|
|[Paragon Software Solutions, Inc.](http://www.paragonhq.com/)| |[Patrocinium Systems, Inc.](https://www.patrocinium.com)| |[PCM](https://www.pcm.com/)| |[Peerless Tech Solutions](https://www.getpeerless.com)|
+|[People Services Inc. DBA CATCH Intelligence](https://catchintelligence.com)|
|[Perrygo Consulting Group, LLC](https://perrygo.com)| |[Perspecta](https://perspecta.com/)| |[Phacil](https://www.phacil.com/)|
@@ -266,6 +288,8 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Pitech Solutions Inc](https://www.pitechsol.com/)| |[Planet Technologies](https://go-planet.com)| |[Plexhosted LLC](https://plexhosted.com/)|
+|[Prescriptive Data Solutions LLC.](https://www.prescriptive.solutions)|
+|[Presidio](https://www.presidio.com)|
|[Principle Information Technology Company](https://www.principleinfotech.com/)| |[Practical Solutions](https://www.ps4b.com)| |[Prayag Lite](https://prayaglite.com/)|
@@ -280,7 +304,6 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Ravnur Inc.](https://www.ravnur.com)| |[Razor Technology, LLC](https://www.razor-tech.com)| |[Re:discovery Software, Inc.](https://rediscoverysoftware.com)|
-|[Hitachi Vantara](https://www.hitachivantarafederal.com/rean-cloud/)|
|[Red Level](https://redlevelgroup.com/)| |[Redapt Attunix](https://www.redapt.com)| |[Redhorse Corporation](https://www.redhorsecorp.com)|
@@ -291,18 +314,24 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Revenue Solutions, Inc](https://www.revenuesolutionsinc.com)| |[RMON Networks Inc.](https://rmonnetworks.com/)| |[rmsource, Inc.](https://www.rmsource.com)|
+|[RV Global Solutions](https://rvglobalsolutions.com/)|
|[Saiph Technologies Corporation](http://www.saiphtech.com/)| |[SAP NS2](https://sapns2.com)|
+|[Sarela Technology Solutions LLC](https://www.sarelatech.com)|
|[Science Applications International Corporation](https://www.saic.com)| |[Secure-24](https://www.secure-24.com)| |[Selex Galileo Inc](http://www.selexgalileo.com/)|
+|[Sev1Tech](https://www.sev1tech.com/)|
|[Sevatec Inc.](https://www.sevatec.com/)| |[Shadow-Soft, LLC.](https://shadow-soft.com)| |[SHI International Corp](https://www.shi.com)|
+|[SHR Consulting Group LLC](https://www.shrgroupllc.com)|
|[Shoshin Technologies Inc.](https://www.shoshintech.com)|
+|[Sieena, Inc.](https://siennatech.com/)|
|[Simons Advisors, LLC](https://simonsadvisors.com/)| |[Sirius Computer Solutions, Inc.](https://www.siriuscom.com/)| |[SKY SOLUTIONS LLC](https://www.skysolutions.com/)|
+|[SKY Terra Technologies LLC](https://www.skyterratech.com)|
|[Smartronix](https://www.smartronix.com)| |[Socius 1 LLC](http://www.socius1.com)| |[Softchoice Corporation](https://www.softchoice.com)|
@@ -312,6 +341,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Stabilify](http://www.stabilify.net/)| |[Stafford Associates](https://www.staffordnet.com/)| |[Static Networks, LLC](https://staticnetworks.com)|
+|[Steel Root](https://steelroot.us)|
|[StoneFly, Inc.](https://stonefly.com)| |[Strategic Communications](https://stratcomminc.com)| |[Stratus Solutions](https://stratussolutions.com)|
@@ -324,12 +354,15 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Synoptek LLC](https://synoptek.com/)| |[Systems Engineering Inc](https://www.seisystems.com)| |[Systems Solutions Inc](https://www.ssi-net.com/)|
+|[Syvantis Technologies, Inc.](https://www.syvantis.com)|
|[Taborda Solutions](https://tabordasolutions.com)| |[TechFlow](https://www.techflow.com)| |[TechnoMile](https://technomile.com/)| |[TechTrend](https://techtrend.us)| |[TekSynap](https://www.teksynap.com)| |[The Cram Group LLC](https://aeccloud.com/)|
+|[The Informatics Application Group Inc.](https://tiag.net)|
+|[The Porter Group, LLC](https://www.thepottergroupllc.com/)|
|[Thundercat Technology](https://www.thundercattech.com/)| |[TIC Business Consultants, Ltd.](https://www.ticbiz.com/)| |[Tier1, Inc.](https://www.tier1inc.com)|
@@ -362,9 +395,13 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[VVL Systems & Consulting, LLC](https://www.vvlsystems.com/)| |[Vistronix, LLC](http://www.vistronix.com/)| |[Vology Inc.](https://www.vology.com/)|
-|[vSolvIT]|
+|vSolvIT|
+|[Warren Averett Technology Group](https://warrenaverett.com/warren-averett-technology-group/)|
|[Wintellect, LLC](https://www.wintellect.com)|
+|[Wintellisys, Inc.](https://wintellisys.com)|
+|[Withum](https://www.withum.com/service/cyber-information-security-services/)|
|[Workspot, Inc.](https://workspot.com)|
+|[Wovenware CA, Inc.](https://www.wovenware.com)|
|[WWT](https://www2.wwt.com)| |[Xantrion Incorporated](https://www.xantrion.com)| |[X-Centric IT Solutions, LLC](https://www.x-centric.com/)|
@@ -374,7 +411,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Xtivia Inc.](https://www.xtivia.com)| |[ZL Technologies Inc.](https://www.zlti.com/)| |[Zones Inc](https://www.zones.com/site/home/index.html)|-
+|[ZR Systems Group LLC](https://zrsystems.com)|
## Approved indirect CSP Providers
@@ -384,6 +421,7 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Arrow Enterprise Computing Solutions, Inc.](http://ecs.arrow.com/)| |[Crayon Software Experts LLC](https://www.crayon.com/en-US)| |[Carahsoft Technology Corporation](https://www.carahsoft.com)|
+|[DLT Solutions](https://www.dlt.com)|
|[Ingram Micro Inc.](https://usa.ingrammicro.com/)| |[Insight Public Sector Inc](https://www.ips.insight.com/en_US/public-sector.html)| |[Synnex](https://www.synnexcorp.com)|
@@ -412,22 +450,30 @@ Below you can find a list of all the authorized Cloud Solution Providers, which
|[Applied Information Sciences](https://www.appliedis.com)| |[Arctic Information Technology, Inc.](https://arcticit.com)| |[C3 Integrated Solutions, Inc.](https://www.c3isit.com)|
+|[CACI](https://www.caci.com)|
+|[Carahsoft](https://www.carahsoft.com/microsoft)|
|[Catapult Systems, LLC](https://www.catapultsystems.com)| |[CGI Federal Inc.](https://www.cgi.com/us/en-us/federal)| |[Cloud Navigator, Inc - formerly ISC](https://cloudnav.com)| |[Dox Electronics Inc.](https://www.doxnet.com)| |[F1 Solutions Inc](https://www.f1networks.com)| |[Four Points Technology, LLC](https://www.4points.com)|
+|[Jackpine Technologies](https://www.jackpinetech.com)|
+|Jasper Solutions|
|[KTL Solutions, Inc.](https://www.ktlsolutions.com)| |[LiftOff LLC](https://www.liftoffllc.com)|
+|[Northrop Grumman](https://www.northropgrumman.com/)|
+|[Novetta](https://www.novetta.com)|
|[Permuta Technologies, Inc.](http://www.permuta.com/)| |[Planet Technologies, Inc.](https://go-planet.com)| |[Quiet Professionals, LLC](https://quietprofessionalsllc.com)|
-|[Smartronix](https://www.smartronix.com)|
+|[Red River](https://www.redriver.com)|
|[SAIC](https://www.saic.com)|
+|[Smartronix](https://www.smartronix.com)|
|[Summit 7 Services, Inc.](https://summit7systems.com)| |[TechTrend, Inc](https://techtrend.us)| |[VLCM](https://www.vlcmtech.com)| |[VC3](https://www.vc3.com)|
+|Vexcel|
If you would like to learn more about the Cloud Solution Provider Program, you can do so [here](/partner-center/faq-for-us-govt-cloud). If you would like to apply to the program, you can visit [this link](./documentation-government-csp-application.md). If you are interested to deploy to our [DoD regions via CSP](https://blogs.msdn.microsoft.com/azuregov/2017/12/18/announcing-the-availability-of-dod-regions-via-government-csp-program-for-azure-government/) talk to your CSP Provider and they can enable that for you. For any additional questions, reach out to [Azure Government CSP](mailto:azgovcsp@microsoft.com).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-log-alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/container-insights-log-alerts.md
@@ -2,7 +2,7 @@
title: Log alerts from Azure Monitor for containers | Microsoft Docs description: This article describes how to create custom log alerts for memory and CPU utilization from Azure Monitor for containers. ms.topic: conceptual
-ms.date: 01/07/2020
+ms.date: 01/05/2021
---
@@ -220,7 +220,7 @@ KubePodInventory
KubePodInventory | where TimeGenerated < endDateTime | where TimeGenerated >= startDateTime
- | summarize PodStatus=any(PodStatus) by TimeGenerated, PodUid, ClusterId
+ | summarize PodStatus=any(PodStatus) by TimeGenerated, PodUid, ClusterName
| summarize TotalCount = count(), PendingCount = sumif(1, PodStatus =~ 'Pending'), RunningCount = sumif(1, PodStatus =~ 'Running'),
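The corrected query first collapses duplicate `KubePodInventory` records with `any(PodStatus)` per pod, timestamp, and cluster, and only then counts statuses. A rough Python analogue of those two `summarize` stages, using made-up rows, shows why the deduplication step matters:

```python
# Toy rows standing in for KubePodInventory records; a pod can be
# reported more than once in the same interval.
rows = [
    {"TimeGenerated": "t1", "PodUid": "a", "ClusterName": "c1", "PodStatus": "Running"},
    {"TimeGenerated": "t1", "PodUid": "a", "ClusterName": "c1", "PodStatus": "Running"},
    {"TimeGenerated": "t1", "PodUid": "b", "ClusterName": "c1", "PodStatus": "Pending"},
]

# Stage 1 - summarize any(PodStatus) by TimeGenerated, PodUid, ClusterName:
# keep one status per pod per timestamp so duplicates don't inflate counts.
dedup = {}
for r in rows:
    dedup.setdefault((r["TimeGenerated"], r["PodUid"], r["ClusterName"]), r["PodStatus"])

# Stage 2 - TotalCount, PendingCount, RunningCount (case-insensitive, like =~).
total = len(dedup)
pending = sum(1 for s in dedup.values() if s.lower() == "pending")
running = sum(1 for s in dedup.values() if s.lower() == "running")
print(total, pending, running)  # 2 1 1
```

Without stage 1, the duplicated `Running` row would be counted twice.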
@@ -303,4 +303,4 @@ This section walks through the creation of a metric measurement alert rule using
- View [log query examples](container-insights-log-search.md#search-logs-to-analyze-data) to see pre-defined queries and examples to evaluate or customize for alerting, visualizing, or analyzing your clusters. -- To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md) and [View Kubernetes cluster health](./container-insights-overview.md).\ No newline at end of file
+- To learn more about Azure Monitor and how to monitor other aspects of your Kubernetes cluster, see [View Kubernetes cluster performance](container-insights-analyze.md) and [View Kubernetes cluster health](./container-insights-overview.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/insights/data-explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/data-explorer.md new file mode 100644
@@ -0,0 +1,161 @@
+---
+title: Azure Monitor for Azure Data Explorer (preview)| Microsoft Docs
+description: This article describes Azure Monitor Insights for Azure Data Explorer Clusters.
+services: azure-monitor
+ms.topic: conceptual
+ms.date: 01/05/2021
+author: lgayhardt
+ms.author: lagayhar
+
+---
+
+# Azure Monitor for Azure Data Explorer (preview)
+
+Azure Monitor for Azure Data Explorer (preview) provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures.
+This article will help you understand how to onboard and use Azure Monitor for Azure Data Explorer (preview).
+
+## Introduction to Azure Monitor for Azure Data Explorer (preview)
+
+Before jumping into the experience, you should understand how it presents and visualizes information.
+- **At scale perspective** showing a snapshot view of your clusters' primary metrics, to easily track performance of queries, ingestion, and export operations.
+- **Drill down analysis** of a particular Azure Data Explorer cluster to help perform detailed analysis.
+- **Customizable** where you can change which metrics you want to see, modify or set thresholds that align with your limits, and save your own custom workbooks. Charts in the workbook can be pinned to Azure dashboards.
+
+## View from Azure Monitor (at scale perspective)
+
+From Azure Monitor, you can view the main performance metrics for your clusters, including metrics for queries, ingestion, and export operations, across multiple clusters in your subscription, which can help you identify performance problems.
+
+To view the performance of your clusters across all your subscriptions, perform the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Select **Monitor** from the left-hand pane in the Azure portal, and under the Insights Hub section, select **Azure Data Explorer Clusters (preview)**.
+
+![Screenshot of overview experience with multiple graphs](./media/data-explorer/insights-hub.png)
+
+### Overview tab
+
+On the **Overview** tab for the selected subscription, the table displays interactive metrics for the Azure Data Explorer clusters grouped within the subscription. You can filter results based on the options you select from the following drop-down lists:
+
+* Subscriptions – only subscriptions that have Azure Data Explorer clusters are listed.
+
+* Azure Data Explorer clusters – by default, only up to five clusters are pre-selected. If you select all or multiple clusters in the scope selector, up to 200 clusters will be returned.
+
+* Time Range – by default, displays the last 24 hours of information based on the corresponding selections made.
+
+The counter tile, under the drop-down lists, rolls up the total number of Azure Data Explorer clusters in the selected subscriptions and reflects how many are selected. Conditional color coding is applied to the Keep alive, CPU, Ingestion Utilization, and Cache Utilization columns: orange-coded cells have values that are not sustainable for the cluster.
+
+To better understand what each of these metrics represents, we recommend reading through the documentation on [Azure Data Explorer metrics](https://docs.microsoft.com/azure/data-explorer/using-metrics#cluster-metrics).
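The conditional color coding described above amounts to a per-column threshold check. A minimal sketch, with hypothetical thresholds (the workbook's actual limits are configurable and not specified here):

```python
# Hypothetical thresholds for the color-coded columns; illustrative only.
THRESHOLDS = {"CPU": 80.0, "Ingestion Utilization": 80.0, "Cache Utilization": 90.0}

def cell_color(column: str, value: float) -> str:
    """Return 'orange' when a value is above the column's assumed sustainable limit."""
    limit = THRESHOLDS.get(column)
    return "orange" if limit is not None and value >= limit else "default"

print(cell_color("CPU", 95.0))  # orange
print(cell_color("CPU", 40.0))  # default
```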
+
+### Query Performance tab
+
+This tab shows the query duration, total number of concurrent queries, and the total number of throttled queries.
+
+![Screenshot of query performance tab](./media/data-explorer/query-performance.png)
+
+### Ingestion Performance tab
+
+This tab shows the ingestion latency, succeeded ingestion results, failed ingestion results, ingestion volume, and events processed for Event/IoT Hubs.
+
+[![Screenshot of ingestion performance tab](./media/data-explorer/ingestion-performance.png)](./media/data-explorer/ingestion-performance.png#lightbox)
+
+### Streaming Ingest Performance tab
+
+This tab provides information on the average data rate, average duration, and request rate.
+
+### Export Performance tab
+
+This tab provides information on exported records, lateness, pending count, and utilization percentage for continuous export operations.
+
+## View from an Azure Data Explorer Cluster resource (drill down analysis)
+
+To access Azure Monitor for Azure Data Explorer Clusters directly from an Azure Data Explorer Cluster:
+
+1. In the Azure portal, select **Azure Data Explorer Clusters**.
+
+2. From the list, choose an Azure Data Explorer Cluster. In the monitoring section, choose **Insights (preview)**.
+
+These views are also accessible by selecting the resource name of an Azure Data Explorer cluster from within the Azure Monitor insights view.
+
+Azure Monitor for Azure Data Explorer combines both logs and metrics to provide a global monitoring solution. The inclusion of logs-based visualizations requires users to [enable diagnostic logging of their Azure Data Explorer cluster and send them to a Log Analytics workspace](https://docs.microsoft.com/azure/data-explorer/using-diagnostic-logs?tabs=commands-and-queries#enable-diagnostic-logs). The diagnostic logs that should be enabled are: **Command**, **Query**, **TableDetails**, and **TableUsageStatistics**.
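The four categories listed above map to the `logs` array of a diagnostic-settings payload. A minimal sketch of that fragment (the real diagnostic-settings call also needs the cluster's resource ID and the Log Analytics workspace ID, which are omitted here):

```python
import json

# The log categories the insight relies on, taken from the article.
categories = ["Command", "Query", "TableDetails", "TableUsageStatistics"]

# Shape them as the 'logs' array of a diagnostic-settings payload.
logs = [{"category": c, "enabled": True} for c in categories]
print(json.dumps(logs, indent=2))
```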
+
+![Screenshot of blue button that displays the text "Enable Logs for Monitoring"](./media/data-explorer/enable-logs.png)
+
+ The **Overview** tab shows:
+
+- Metrics tiles highlighting the availability and overall status of the cluster to quickly assess its health.
+
+- A summary of active [Advisor recommendations](https://docs.microsoft.com/azure/data-explorer/azure-advisor) and [resource health](https://docs.microsoft.com/azure/data-explorer/monitor-with-resource-health) status.
+
+- Charts showing the top CPU and memory consumers and the number of unique users over time.
+
+[![Screenshot of view from an Azure Data Explorer cluster resource](./media/data-explorer/overview.png)](./media/data-explorer/overview.png#lightbox)
+
+The **Key Metrics** tab shows a unified view of some of the cluster's metrics, grouped by: general metrics, query-related, ingestion-related, and streaming ingestion-related metrics.
+
+[![Screenshot of failures view](./media/data-explorer/key-metrics.png)](./media/data-explorer/key-metrics.png#lightbox)
+
+The **Usage** tab allows users to deep dive into the performance of the cluster's commands and queries. On this page, you can:
+
+ - See which users and applications are sending the most queries or consuming the most CPU and memory (so you can understand which users are submitting the heaviest queries for the cluster to process).
+ - Identify top users and applications by failed queries.
+ - Identify recent changes in the number of queries, compared to the historical daily average (over the past 16 days), by user and application.
+ - Identify trends and peaks in the number of queries, memory, and CPU consumption by user, application and command type.
+
+[![Screenshot of operations view with donut charts of top application by command and query count, top principals by command and query count, and top commands by command types](./media/data-explorer/usage.png)](./media/data-explorer/usage.png#lightbox)
+
+[![Screenshot of operations view with line charts of query count by application, total memory by application and total CPU by application](./media/data-explorer/usage-2.png)](./media/data-explorer/usage-2.png#lightbox)
+
+The **tables** tab shows the latest and historical properties of tables in the cluster. You can see which tables are consuming the most space, track growth history by table size, hot data, and the number of rows over time.
+
+The **cache** tab allows users to analyze their actual queries' look-back patterns and compare them to the configured cache policy (for each table). You can identify tables used by the most queries and tables that are not queried at all, and adapt the cache policy accordingly. You may get cache policy recommendations for specific tables in Azure Advisor (currently, cache recommendations are available only from the [main Azure Advisor dashboard](https://docs.microsoft.com/azure/data-explorer/azure-advisor#use-the-azure-advisor-recommendations)), based on actual query look-back during the past 30 days and an unoptimized cache policy for at least 95% of the queries. Cache reduction recommendations in Azure Advisor are available for clusters that are "bounded by data" (meaning the cluster has low CPU and low ingestion utilization, but because of high data capacity, the cluster could not scale in or scale down).
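Comparing query look-back to the cache policy, as the cache tab does, reduces to checking each table's worst-case look-back window against its hot-cache window. A rough sketch with made-up tables and policies:

```python
from datetime import timedelta

# Hypothetical cache policies (hot-cache days) and observed query look-backs.
cache_policy_days = {"Events": 31, "Telemetry": 7}
lookbacks = {
    "Events": [timedelta(days=2), timedelta(days=5)],
    "Telemetry": [timedelta(days=14)],
}

status_by_table = {}
for table, windows in lookbacks.items():
    worst = max(windows)  # the longest look-back any query used
    policy = timedelta(days=cache_policy_days[table])
    status_by_table[table] = "OK" if worst <= policy else "queries exceed hot cache"

print(status_by_table)
```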
+
+[![Screenshot of cache details](./media/data-explorer/cache-tab.png)](./media/data-explorer/cache-tab.png#lightbox)
+
+## Pin to Azure dashboard
+
+You can pin any one of the metric sections (of the "at-scale" perspective) to an Azure dashboard by selecting the pushpin icon at the top right of the section.
+
+![Screenshot of pin icon selected](./media/data-explorer/pin.png)
+
+## Customize Azure Monitor for Azure Data Explorer Cluster
+
+This section highlights common scenarios for editing the workbook to customize in support of your data analytics needs:
+* Scope the workbook to always select a particular subscription or Azure Data Explorer Cluster(s)
+* Change metrics in the grid
+* Change thresholds or color rendering/coding
+
+To begin customizing, enable editing mode by selecting the **Customize** button from the top toolbar.
+
+![Screenshot of customize button](./media/data-explorer/customize.png)
+
+Customizations are saved to a custom workbook to prevent overwriting the default configuration in our published workbook. Workbooks are saved within a resource group, either in the My Reports section that is private to you or in the Shared Reports section that's accessible to everyone with access to the resource group. After you save the custom workbook, you need to go to the workbook gallery to launch it.
+
+![Screenshot of the workbook gallery](./media/data-explorer/gallery.png)
+
+## Troubleshooting
+
+For general troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](troubleshoot-workbooks.md).
+
+This section will help you with the diagnosis and troubleshooting of some of the common issues you may encounter when using Azure Monitor for Azure Data Explorer Cluster (preview). Use the list below to locate the information relevant to your specific issue.
+
+### Why don't I see all my subscriptions in the subscription picker?
+
+The subscription picker shows only subscriptions that contain Azure Data Explorer clusters and that are chosen in the "Directory + Subscription" filter in the Azure portal header.
+
+![Screenshot of subscription filter](./media/key-vaults-insights-overview/Subscriptions.png)
+
+### Why do I not see any data for my Azure Data Explorer Cluster under the Usage, Tables or Cache sections?
+
+To view your logs-based data, you will need to [enable diagnostic logs](https://docs.microsoft.com/azure/data-explorer/using-diagnostic-logs?tabs=commands-and-queries#enable-diagnostic-logs) for each of the Azure Data Explorer Clusters you want to monitor. This can be done under the diagnostic settings for each cluster. You will need to send your data to a Log Analytics workspace. The diagnostic logs that should be enabled are: Command, Query, TableDetails, and TableUsageStatistics.
+
+### I have already enabled logs for my Azure Data Explorer Cluster, why am I still unable to see my data under Commands and Queries?
+
+Currently, diagnostic logs do not work retroactively, so data will only start appearing once actions have been taken on your Azure Data Explorer cluster. As a result, it may take some time, ranging from hours to a day, for data to appear, depending on how active your cluster is.
+
+## Next steps
+
+Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../platform/workbooks-overview.md).
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-alerts-webhook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/activity-log-alerts-webhook.md
@@ -23,6 +23,20 @@ The webhook can optionally use token-based authorization for authentication. The
## Payload schema The JSON payload contained in the POST operation differs based on the payload's data.context.activityLog.eventSource field.
+> [!NOTE]
+> Currently, the description that is part of the Activity log event is copied to the fired **"Alert Description"** property.
+>
+> In order to align the activity log payload with other alert types, starting April 1, 2021, the fired alert property **"Description"** will contain the alert rule description instead.
+>
+> In preparation for this change, we created a new property, **"Activity Log Event Description"**, on the fired activity log alert. This new property will be filled with the **"Description"** property that is already available for use. This means that the new field **"Activity Log Event Description"** will contain the description that is part of the activity log event.
+>
+> Review your alert rules, action rules, webhooks, logic apps, or any other configurations where you might be using the **"Description"** property from the fired alert, and replace it with the **"Activity Log Event Description"** property.
+>
+> If your condition (in your action rules, webhooks, logic apps, or any other configurations) is currently based on the **"Description"** property for activity log alerts, you may need to modify it to be based on the **"Activity Log Event Description"** property instead.
+>
+> In order to fill the new **"Description"** property, you can add a description in the alert rule definition.
+> ![Fired Activity Log Alerts](media/activity-log-alerts-webhook/activity-log-alert-fired.png)
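For webhook receivers, the migration described in this note can be handled by preferring the new field and falling back to the old one. A hedged sketch (the exact placement of these fields within the real payload may differ; treat the key names and flat dictionary shape as assumptions):

```python
def alert_description(payload: dict) -> str:
    """Prefer 'Activity Log Event Description'; fall back to 'Description'
    for alerts fired before the April 1, 2021 change."""
    return payload.get("Activity Log Event Description") or payload.get("Description", "")

print(alert_description({"Description": "vm restarted"}))  # vm restarted
print(alert_description({
    "Activity Log Event Description": "vm restarted",
    "Description": "my alert rule description",
}))  # vm restarted
```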
+ ### Common ```json
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/azure-monitor-agent-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/azure-monitor-agent-overview.md
@@ -50,7 +50,7 @@ The following limitations apply during public preview of the Azure Monitor Agent
- The Azure Monitor agent does not support solutions and insights such as Azure Monitor for VMs and Azure Security Center. The only scenario currently supported is collecting data using the data collection rules that you configure. - Data collection rules must be created in the same region as any Log Analytics workspace used as a destination.-- Azure virtual machines and Azure Arc enabled servers are currently supported.Virtual machine scale sets, Azure Kubernetes Service, and other compute resource types are not currently supported.
+- Azure virtual machines, virtual machine scale sets, and Azure Arc enabled servers are currently supported. Azure Kubernetes Service and other compute resource types are not currently supported.
- The virtual machine must have access to the following HTTPS endpoints: - *.ods.opinsights.azure.com - *.ingest.monitor.azure.com
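A quick way to sanity-check a proxy or firewall allow-list against those wildcard endpoints is a simple pattern match; a sketch:

```python
from fnmatch import fnmatch

# Wildcard HTTPS endpoints the agent must reach, from the article.
PATTERNS = ["*.ods.opinsights.azure.com", "*.ingest.monitor.azure.com"]

def is_required_endpoint(host: str) -> bool:
    """True if the host matches one of the agent's required endpoint patterns."""
    return any(fnmatch(host, p) for p in PATTERNS)

print(is_required_endpoint("myworkspace.ods.opinsights.azure.com"))  # True
print(is_required_endpoint("example.com"))  # False
```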
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/itsmc-definition.md
@@ -123,29 +123,30 @@ Use the following procedure to create action groups:
8. If you want to fill out-of-the-box fields with fixed values, select **Use Custom Template**. Otherwise, choose an existing [template](#template-definitions) in the **Template** list and enter the fixed values in the template fields.
-9. If you select **Create individual work items for each Configuration Item**, every configuration item will have its own work item. Meaning there will be one work item per configuration item.
+9. In the last section of the ITSM action group definition, you can define how many work items will be created for each alert. This section is relevant only to log search alerts.
- * In a case you select in the work item dropdown "Incident" or "Alert":
- * If you check the **Create individual work items for each Configuration Item** check box, every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system.
+ * If you select "Incident" or "Alert" in the work item dropdown:
+ * If you check the **Create individual work items for each Configuration Item** check box, every configuration item in every alert will create a new work item. There can be more than one work item per configuration item in the ITSM system.
For example:
- 1) Alert 1 with 3 Configuration Items: A, B, C will create 3 work items.
- 2) Alert 2 with 1 Configuration Item: D will create 1 work item.
+ 1) Alert 1 with 3 Configuration Items: A, B, C - will create 3 work items.
+ 2) Alert 2 with 1 Configuration Item: D - will create 1 work item.
**By the end of this flow there will be 4 work items** * If you clear the **Create individual work items for each Configuration Item** check box, not every alert will create a new work item; work items will be merged according to the alert rule. For example:
- 1) Alert 1 with 3 Configuration Items: A, B, C will create 1 work item.
- 2) Alert 2 for the same alert rule as phase 1 with 1 Configuration Item: D will be merged to the work item in phase 1.
- 3) Alert 3 for a different alert rule with 1 Configuration Item: E will create 1 work item.
+ 1) Alert 1 with 3 Configuration Items: A, B, C - will create 1 work item.
+ 2) Alert 2 for the same alert rule as phase 1 with 1 Configuration Item: D - will be merged to the work item in phase 1.
+ 3) Alert 3 for a different alert rule with 1 Configuration Item: E - will create 1 work item.
**By the end of this flow there will be 2 work items** ![Screenshot that shows the ITSM Incident window.](media/itsmc-overview/itsm-action-configuration.png)
- * In a case you select in the work item dropdown "Event": If you select **Create individual work items for each Log Entry** in the radio buttons selection, every
- alert will create a new work item. If you select **Create individual work items for each Configuration Item** in the radio buttons selection, every configuration item will have its own work item.
+ * If you select "Event" in the work item dropdown:
+ * If you select **Create individual work items for each Log Entry** in the radio buttons selection, a work item will be created for each row in the search results of the log search alert query. In the alert payload, the description property contains the row from the search results.
+ * If you select **Create individual work items for each Configuration Item** in the radio buttons selection, every configuration item in every alert creates a new work item. There can be more than one work item per configuration item in the ITSM system. This behavior is the same as selecting the check box in the Incident/Alert section.
![Screenshot that shows the ITSM Event window.](media/itsmc-overview/itsm-action-configuration-event.png) 10. Select **OK**.
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/platform/private-storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/platform/private-storage.md
@@ -10,15 +10,15 @@ ms.date: 09/03/2020
# Using customer-managed storage accounts in Azure Monitor Log Analytics
-Log Analytics relies on Azure Storage in a variety of scenarios. This use is typically managed automatically. However, some cases require you to provide and manage your own storage account, also referred to as a customer-managed storage account. This document details the usage of customer-managed storage for the ingestion of WAD/LAD logs, Private Link specific scenarios, and customer-managed key (CMK) encryption.
+Log Analytics relies on Azure Storage in various scenarios. This use is typically managed automatically. However, some cases require you to provide and manage your own storage account, also referred to as a customer-managed storage account. This document covers the use of customer-managed storage for WAD/LAD logs, Private Link, and customer-managed key (CMK) encryption.
> [!NOTE] > We recommend that you don't take a dependency on the contents Log Analytics uploads to customer-managed storage, given that formatting and content may change. ## Ingesting Azure Diagnostics extension logs (WAD/LAD) The Azure Diagnostics extension agents (also called WAD and LAD for Windows and Linux agents respectively) collect various operating system logs and store them on a customer-managed storage account. You can then ingest these logs into Log Analytics to review and analyze them.
-How to collect Azure Diagnostics extension logs from your storage account
-Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](./diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/storage%20insights/createorupdate).
+### How to collect Azure Diagnostics extension logs from your storage account
+Connect the storage account to your Log Analytics workspace as a storage data source using [the Azure portal](./diagnostics-extension-logs.md#collect-logs-from-azure-storage) or by calling the [Storage Insights API](/rest/api/loganalytics/connectedsources/storage%20insights/createorupdate).
Supported data types: * Syslog
@@ -28,67 +28,77 @@ Supported data types:
* IIS Logs ## Using Private links
-Customer managed storage accounts are required in some use cases, when private links are used to connect to Azure Monitor resources. One such case is the ingestion of Custom logs or IIS logs. These data types are first uploaded as blobs to an intermediary Azure Storage account and only then ingested to a workspace. Similarly, some Azure Monitor solutions may use storage accounts to store large files, such as Azure Security Center (ASC) which may need to upload files.
+Customer-managed storage accounts are used to ingest Custom logs or IIS logs when private links are used to connect to Azure Monitor resources. The ingestion process of these data types first uploads logs to an intermediary Azure Storage account, and only then ingests them to a workspace.
-##### Private Link scenarios that require a customer-managed storage
-* Ingestion of Custom logs and IIS logs
-* Allowing ASC solution to upload files
+### Using a customer-managed storage account over a Private Link
+#### Workspace requirements
+When connecting to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces accessible over a private link. This requirement means you should:
+* Configure an Azure Monitor Private Link Scope (AMPLS) object
+* Connect it to your workspaces
+* Connect the AMPLS to your network over a private link.
-### How to use a customer-managed storage account over a Private Link
-##### Workspace requirements
-When connecting to Azure Monitor over a private link, Log Analytics agents are only able to send logs to workspaces linked to your network over a private link. This rule requires that you properly configure an Azure Monitor Private Link Scope (AMPLS) object, connect it to your workspaces, and then connect the AMPLS to your network over a private link. For more information on the AMPLS configuration procedure, see [Use Azure Private Link to securely connect networks to Azure Monitor](./private-link-security.md).
-##### Storage account requirements
+For more information on the AMPLS configuration procedure, see [Use Azure Private Link to securely connect networks to Azure Monitor](./private-link-security.md).
+
+#### Storage account requirements
For the storage account to successfully connect to your private link, it must:
-* Be located on your VNet or a peered network and connected to your VNet over a private link. This allows agents on your VNet to send logs to the storage account.
+* Be located on your VNet or a peered network, and connected to your VNet over a private link.
* Be located in the same region as the workspace it's linked to. * Allow Azure Monitor to access the storage account. If you chose to allow only select networks to access your storage account, you should select the exception: "Allow trusted Microsoft services to access this storage account". ![Storage account trust MS services image](./media/private-storage/storage-trust.png) * If your workspace handles traffic from other networks as well, you should configure the storage account to allow incoming traffic coming from the relevant networks/internet.
-##### Link your storage account to a Log Analytics workspace
-You can link your storage account to the workspace via the [Azure CLI](/cli/azure/monitor/log-analytics/workspace/linked-storage) or [REST API](/rest/api/loganalytics/linkedstorageaccounts).
-Applicable dataSourceType values:
-* CustomLogs ΓÇô to use the storage for custom logs and IIS logs during ingestion.
-* AzureWatson ΓÇô use the storage for files uploaded by the ASC (Azure Security Center) solution.
-For more information on managing retention, replacing a linked storage account, and monitoring your storage account activity, see [Managing linked storage accounts](#managing-linked-storage-accounts).
-
-## Encrypting data with CMK
-Azure Storage encrypts all data at rest in a storage account. By default, it encrypts data with Microsoft-managed keys (MMK). However, Azure Storage will instead let you use a Customer-managed key (CMK) from Azure Key vault to encrypt your storage data. You can either import your own keys into Azure Key Vault, or you can use the Azure Key Vault APIs to generate keys.
-##### CMK scenarios that require a customer-managed storage account
+### Using a customer-managed storage account for CMK data encryption
+Azure Storage encrypts all data at rest in a storage account. By default, it uses Microsoft-managed keys (MMK) to encrypt the data. However, Azure Storage also allows you to use CMK from Azure Key Vault to encrypt your storage data. You can either import your own keys into Azure Key Vault, or you can use the Azure Key Vault APIs to generate keys.
+#### CMK scenarios that require a customer-managed storage account
* Encrypting log-alert queries with CMK * Encrypting saved queries with CMK
-### How to apply CMK to customer-managed storage accounts
+#### How to apply CMK to customer-managed storage accounts
##### Storage account requirements The storage account and the key vault must be in the same region, but they can be in different subscriptions. For more information about Azure Storage encryption and key management, see [Azure Storage encryption for data at rest](../../storage/common/storage-service-encryption.md). ##### Apply CMK to your storage accounts
-To configure your Azure Storage account to use customer-managed keys with Azure Key Vault, use the [Azure portal](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json), [PowerShell](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) or the [CLI](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+To configure your Azure Storage account to use CMK with Azure Key Vault, use the [Azure portal](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), [PowerShell](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json), or the [CLI](../../storage/common/customer-managed-keys-configure-key-vault.md?toc=%252fazure%252fstorage%252fblobs%252ftoc.json).
-## Managing linked storage accounts
+## Link storage accounts to your Log Analytics workspace
+### Using the Azure portal
+In the Azure portal, open your workspace's menu and select *Linked storage accounts*. A blade opens, showing the linked storage accounts grouped by the use cases mentioned above (ingestion over Private Link, applying CMK to saved queries or to alerts).
+![Linked storage accounts blade image](./media/private-storage/all-linked-storage-accounts.png)
+Selecting an item in the table opens its storage account details, where you can set or update the linked storage account for this type.
+![Link a storage account blade image](./media/private-storage/link-a-storage-account-blade.png)
+You can use the same account for different use cases if you prefer.
-To link or unlink storage accounts to your workspace use the [Azure CLI](/cli/azure/monitor/log-analytics/workspace/linked-storage) or [REST API](/rest/api/loganalytics/linkedstorageaccounts).
+### Using the Azure CLI or REST API
+You can also link a storage account to your workspace via the [Azure CLI](/cli/azure/monitor/log-analytics/workspace/linked-storage) or [REST API](/rest/api/loganalytics/linkedstorageaccounts).
+
+The applicable dataSourceType values are:
+* CustomLogs - to use the storage account for custom logs and IIS logs ingestion
+* Query - to use the storage account to store saved queries (required for CMK encryption)
+* Alerts - to use the storage account to store log-based alerts (required for CMK encryption)
+
+## Managing linked storage accounts
-##### Create or modify a link
+### Create or modify a link
When you link a storage account to a workspace, Log Analytics will start using it instead of the storage account owned by the service. You can * Register multiple storage accounts to spread the load of logs between them * Reuse the same storage account for multiple workspaces
-##### Unlink a storage account
+### Unlink a storage account
To stop using a storage account, unlink the storage from the workspace. Unlinking all storage accounts from a workspace means Log Analytics will attempt to rely on service-managed storage accounts. If your network has limited access to the internet, these storage accounts may not be available and any scenario that relies on storage will fail.
-##### Replace a storage account
+### Replace a storage account
To replace a storage account used for ingestion, 1. **Create a link to a new storage account.** The logging agents will get the updated configuration and start sending data to the new storage as well. The process could take a few minutes. 2. **Then unlink the old storage account so agents will stop writing to the removed account.** The ingestion process keeps reading data from this account until it's all ingested. Don't delete the storage account until you see that all logs were ingested. ### Maintaining storage accounts
-##### Manage log retention
-When using your own storage account, retention is up to you. In other words, Log Analytics does not delete logs stored on your private storage. Instead, you should setup a policy to handle the load according to your preferences.
+#### Manage log retention
+When using your own storage account, retention is up to you. Log Analytics won't delete logs stored on your private storage. Instead, you should set up a policy to handle the load according to your preferences.
-##### Consider load
-Storage accounts can handle a certain load of read and write requests before they start throttling requests (see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md) for more details). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage accountΓÇÖs capacity and performance review its [Insights in the Azure portal]( https://docs.microsoft.com/azure/azure-monitor/insights/storage-insights-overview).
+#### Consider load
+Storage accounts can handle a certain load of read and write requests before they start throttling requests (for more information, see [Scalability and performance targets for Blob storage](../../storage/common/scalability-targets-standard-account.md)). Throttling affects the time it takes to ingest logs. If your storage account is overloaded, register an additional storage account to spread the load between them. To monitor your storage account's capacity and performance, review its [Insights in the Azure portal](https://docs.microsoft.com/azure/azure-monitor/insights/storage-insights-overview).
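Spreading load across several registered storage accounts, as suggested above, can be pictured as a simple round-robin assignment. A hypothetical sketch (the actual distribution is handled by the logging agents, not by you):

```python
from itertools import cycle

def spread_batches(batches, storage_accounts):
    """Round-robin assignment of log batches to linked storage accounts.

    Illustrative only: models how registering multiple accounts
    divides the write load between them.
    """
    assignment = {acct: [] for acct in storage_accounts}
    for batch, acct in zip(batches, cycle(storage_accounts)):
        assignment[acct].append(batch)
    return assignment

result = spread_batches(list(range(10)), ["storageA", "storageB"])
# Each account receives roughly half the batches, halving per-account load.
```

With two registered accounts, each one absorbs about half the write requests, which pushes the throttling threshold correspondingly further out.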
### Related charges Storage accounts are charged by the volume of stored data, the type of the storage, and the type of redundancy. For details see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs) and [Table Storage pricing](https://azure.microsoft.com/pricing/details/storage/tables).
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: conceptual
-ms.date: 11/16/2020
+ms.date: 01/05/2021
ms.author: b-juche --- # FAQs About Azure NetApp Files
@@ -133,6 +133,16 @@ Yes, you can. However, the file path must be used in either a different subscrip
For example, you create a volume called `vol1`. Then you create another volume, also called `vol1`, in a different capacity pool but in the same subscription and region. In this case, using the same volume name `vol1` causes an error. To use the same file path, the name must be in a different region or subscription.
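The uniqueness rule above can be stated compactly: a volume name may repeat only under a different subscription or region. A hypothetical check (not an Azure NetApp Files API) mirroring that rule:

```python
# Hypothetical model of the file-path uniqueness rule described above:
# a volume name may be reused only in a different subscription or region.

def creation_allowed(existing, new):
    """existing: list of (name, subscription, region); new: one such tuple."""
    name, sub, region = new
    return not any(
        n == name and s == sub and r == region
        for n, s, r in existing
    )

existing = [("vol1", "sub1", "eastus")]
print(creation_allowed(existing, ("vol1", "sub1", "eastus")))  # False: same sub + region
print(creation_allowed(existing, ("vol1", "sub1", "westus")))  # True: different region
```

The capacity pool doesn't appear in the check at all, matching the example: a different capacity pool alone isn't enough to reuse the name.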
+### When I try to access NFS volumes through a Windows client, why does the client take a long time to search folders and subfolders?
+
+Make sure that `CaseSensitiveLookup` is enabled on the Windows client to speed up the look-up of folders and subfolders:
+
+1. Use the following PowerShell command to enable CaseSensitiveLookup:
+ `Set-NfsClientConfiguration -CaseSensitiveLookup 1`
+2. Mount the volume on the Windows server.
+ Example:
+ `Mount -o rsize=1024 -o wsize=1024 -o mtype=hard \\10.x.x.x\testvol X:*`
+ ## SMB FAQs ### Which SMB versions are supported by Azure NetApp Files?
azure-netapp-files https://docs.microsoft.com/en-us/azure/azure-netapp-files/create-volumes-dual-protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
@@ -13,7 +13,7 @@ ms.workload: storage
ms.tgt_pltfrm: na ms.devlang: na ms.topic: how-to
-ms.date: 01/04/2020
+ms.date: 01/05/2021
ms.author: b-juche --- # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
@@ -128,7 +128,10 @@ Azure NetApp Files supports creating volumes using NFS (NFSv3 and NFSv4.1), SMB3
* Another machine in the domain containing the root certificate 3. Export the root CA certificate.
- Root CA certificates can be exported from Personal or Trusted Root Certification Authorities.
+ Root CA certificates can be exported from the Personal or Trusted Root Certification Authorities directory, as shown in the following examples:
+ ![screenshot that shows personal certificates](../media/azure-netapp-files/personal-certificates.png)
+ ![screenshot that shows trusted root certification authorities](../media/azure-netapp-files/trusted-root-certification-authorities.png)
+ Ensure that the certificate is exported in the Base-64 encoded X.509 (.CER) format: ![Certificate Export Wizard](../media/azure-netapp-files/certificate-export-wizard.png)
azure-relay https://docs.microsoft.com/en-us/azure/azure-relay/relay-what-is-it https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-relay/relay-what-is-it.md
@@ -12,7 +12,7 @@ The relay service supports the following scenarios between on-premises services
- Traditional one-way, request/response, and peer-to-peer communication - Event distribution at internet-scope to enable publish/subscribe scenarios -- Bi-directional and unbuffered socket communication across network boundaries.
+- Bi-directional and unbuffered socket communication across network boundaries
Azure Relay differs from network-level integration technologies such as VPN. An Azure relay can be scoped to a single application endpoint on a single machine. The VPN technology is far more intrusive, as it relies on altering the network environment.
@@ -51,7 +51,7 @@ Hybrid Connections and WCF Relay both enable secure connection to assets that ex
| **WCF** |x | | | **.NET Core** | |x | | **.NET Framework** |x |x |
-| **Java script/Node.JS** | |x |
+| **JavaScript/Node.js** | |x |
| **Standards-Based open protocol** | |x | | **RPC programming models** | |x |
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/manage-resource-groups-cli.md
@@ -3,8 +3,8 @@ title: Manage resource groups - Azure CLI
description: Use Azure CLI to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. author: mumian ms.topic: conceptual
-ms.date: 09/01/2020
-ms.author: jgao
+ms.date: 01/05/2021
+ms.author: jgao
ms.custom: devx-track-azurecli ---
@@ -79,14 +79,14 @@ You can move the resources in the group to another resource group. For more info
## Lock resource groups
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
The following script locks a resource group so the resource group can't be deleted. ```azurecli-interactive echo "Enter the Resource Group name:" && read resourceGroupName &&
-az lock create --name LockGroup --lock-type CanNotDelete --resource-group $resourceGroupName
+az lock create --name LockGroup --lock-type CanNotDelete --resource-group $resourceGroupName
``` The following script gets all locks for a resource group:
@@ -94,7 +94,7 @@ The following script gets all locks for a resource group:
```azurecli-interactive echo "Enter the Resource Group name:" && read resourceGroupName &&
-az lock list --resource-group $resourceGroupName
+az lock list --resource-group $resourceGroupName
``` The following script deletes a lock:
@@ -120,13 +120,88 @@ After setting up your resource group successfully, you may want to view the Reso
- Automate future deployments of the solution because the template contains all the complete infrastructure. - Learn template syntax by looking at the JavaScript Object Notation (JSON) that represents your solution.
+To export all resources in a resource group, use [az group export](/cli/azure/group?view=azure-cli-latest#az_group_export&preserve-view=true) and provide the resource group name.
+
+```azurecli-interactive
+echo "Enter the Resource Group name:" &&
+read resourceGroupName &&
+az group export --name $resourceGroupName
+```
+
+The script displays the template on the console. Copy the JSON and save it as a file.
+
+Instead of exporting all resources in the resource group, you can select which resources to export.
+
+To export one resource, pass that resource ID.
+ ```azurecli-interactive echo "Enter the Resource Group name:" && read resourceGroupName &&
-az group export --name $resourceGroupName
+echo "Enter the storage account name:" &&
+read storageAccountName &&
+storageAccount=$(az resource show --resource-group $resourceGroupName --name $storageAccountName --resource-type Microsoft.Storage/storageAccounts --query id --output tsv) &&
+az group export --resource-group $resourceGroupName --resource-ids $storageAccount
+```
+
+To export more than one resource, pass the space-separated resource IDs. To export all resources, do not specify this argument or supply "*".
+
+```azurecli-interactive
+az group export --resource-group <resource-group-name> --resource-ids $storageAccount1 $storageAccount2
+```
+
+When exporting the template, you can specify whether parameters are used in the template. By default, parameters for resource names are included but they don't have a default value. You must pass that parameter value during deployment.
+
+```json
+"parameters": {
+ "serverfarms_demoHostPlan_name": {
+ "type": "String"
+ },
+ "sites_webSite3bwt23ktvdo36_name": {
+ "type": "String"
+ }
+}
```
-The script displays the template on the console. Copy the JSON, and save as a file.
+In the resource, the parameter is used for the name.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2016-09-01",
+ "name": "[parameters('serverfarms_demoHostPlan_name')]",
+ ...
+ }
+]
+```
+
+If you use the `--include-parameter-default-value` parameter when exporting the template, the template parameter includes a default value that is set to the current value. You can either use that default value or overwrite the default value by passing in a different value.
+
+```json
+"parameters": {
+ "serverfarms_demoHostPlan_name": {
+ "defaultValue": "demoHostPlan",
+ "type": "String"
+ },
+ "sites_webSite3bwt23ktvdo36_name": {
+ "defaultValue": "webSite3bwt23ktvdo36",
+ "type": "String"
+ }
+}
+```
+
+If you use the `--skip-resource-name-params` parameter when exporting the template, parameters for resource names aren't included in the template. Instead, the resource name is set directly on the resource to its current value. You can't customize the name during deployment.
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.Web/serverfarms",
+ "apiVersion": "2016-09-01",
+ "name": "demoHostPlan",
+ ...
+ }
+]
+```
The export template feature doesn't support exporting Azure Data Factory resources. To learn about how you can export Data Factory resources, see [Copy or clone a data factory in Azure Data Factory](../../data-factory/copy-clone-data-factory.md).
azure-resource-manager https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-specs-create-linked https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs-create-linked.md
@@ -2,7 +2,7 @@
title: Create a template spec with linked templates description: Learn how to create a template spec with linked templates. ms.topic: conceptual
-ms.date: 11/17/2020
+ms.date: 01/05/2021
---
@@ -187,7 +187,7 @@ az ts create \
--version "1.0.0.0" \ --resource-group templateSpecRG \ --location "westus2" \
- --template-file "c:\Templates\linkedTS\azuredeploy.json"
+ --template-file "<path-to-main-template>"
``` ---
@@ -233,7 +233,7 @@ az group create \
--name webRG \ --location westus2
-id = $(az template-specs show --name webSpec --resource-group templateSpecRG --version "1.0.0.0" --query "id")
+id = $(az ts show --name webSpec --resource-group templateSpecRG --version "1.0.0.0" --query "id")
az deployment group create \ --resource-group webRG \
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
@@ -36,20 +36,21 @@ The SQL Server IaaS Agent extension provides a number of benefits for SQL Server
- **Free**: The extension in all three manageability modes is completely free. There is no additional cost associated with the extension, or with changing management modes. -- **Simplified license management**: The extension simplifies SQL Server license management, and allows you to quickly identify SQL Server VMs with the Azure Hybrid Benefit enabled using the [Azure portal](manage-sql-vm-portal.md), the Azure CLI, or PowerShell:
+- **Simplified license management**: The extension simplifies SQL Server license management, and allows you to quickly identify SQL Server VMs with the Azure Hybrid Benefit enabled using the [Azure portal](manage-sql-vm-portal.md), PowerShell or the Azure CLI:
+
+ # [PowerShell](#tab/azure-powershell)
+
+ ```powershell-interactive
+ Get-AzSqlVM | Where-Object {$_.LicenseType -eq 'AHUB'}
+ ```
# [Azure CLI](#tab/azure-cli) ```azurecli-interactive
- $vms = az sql vm list | ConvertFrom-Json
- $vms | Where-Object {$_.sqlServerLicenseType -eq "AHUB"}
+ az sql vm list --query "[?sqlServerLicenseType=='AHUB']"
```
- # [PowerShell](#tab/azure-powershell)
- ```powershell-interactive
- Get-AzSqlVM | Where-Object {$_.LicenseType -eq 'AHUB'}
- ```
---
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/concepts-hub-and-spoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-hub-and-spoke.md
@@ -123,7 +123,7 @@ As a security best practice, deploy [Microsoft Azure Bastion](../bastion/index.y
For Azure DNS resolution, there are two options available: -- Use the Azure Active Directory (Azure AD) domain controllers deployed on the Hub (described in [Identity considerations](#identity-considerations)) as name servers.
+- Use the domain controllers deployed on the Hub (described in [Identity considerations](#identity-considerations)) as name servers.
- Deploy and configure an Azure DNS private zone.
@@ -131,7 +131,7 @@ The best approach is to combine both to provide reliable name resolution for Azu
As a general design recommendation, use the existing Azure DNS infrastructure (in this case, Active Directory-integrated DNS) deployed onto at least two Azure VMs deployed in the Hub virtual network and configured in the Spoke virtual networks to use those Azure DNS servers in the DNS settings.
-You can use Azure Private DNS, where the Azure Private DNS zone links to the virtual network. The DNS servers are used as hybrid resolvers with conditional forwarding to on-premises or Azure VMware Solution running DNS leveraging customer Azure Private DNS infrastructure.
+You can use Azure Private DNS, where the Azure Private DNS zone links to the virtual network. The DNS servers are used as hybrid resolvers with conditional forwarding to on-premises or Azure VMware Solution running DNS using customer Azure Private DNS infrastructure.
To automatically manage the DNS records' lifecycle for the VMs deployed within the Spoke virtual networks, enable autoregistration. When enabled, the maximum number of private DNS zones is only one. If disabled, then the maximum number is 1000.
@@ -139,7 +139,7 @@ On-premises and Azure VMware Solution servers can be configured with conditional
## Identity considerations
-For identity purposes, the best approach is to deploy at least one AD domain controller on the Hub. Use two shared service subnets in zone-distributed fashion or a VM availability set. See [Azure Architecture Center](/azure/architecture/reference-architectures/identity/adds-extend-domain) for extending your on-premises AD domain to Azure.
+For identity purposes, the best approach is to deploy at least one domain controller on the Hub. Use two shared service subnets in zone-distributed fashion or a VM availability set. For more information on extending your on-premises Active Directory (AD) domain to Azure, see [Azure Architecture Center](/azure/architecture/reference-architectures/identity/adds-extend-domain).
Additionally, deploy another domain controller on the Azure VMware Solution side to act as identity and DNS source within the vSphere environment.
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/production-ready-deployment-steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
@@ -57,7 +57,7 @@ Define the NSX-T admin password. During the deployment, you'll create an NSX-T
The first step in planning the deployment is to plan out the IP segmentation. Azure VMware Solution ingests a /22 network that you provide. Then carves it up into smaller segments and then uses those IP segments for vCenter, VMware HCX, NSX-T, and vMotion.
-Azure VMware Solution connects to your Microsoft Azure Virtual Network via an internal ExpressRoute circuit. In most cases, it connects to your data center via ExpressRoute Global Reach.
+Azure VMware Solution connects to your Microsoft Azure Virtual Network through an internal ExpressRoute circuit. In most cases, it connects to your data center through ExpressRoute Global Reach.
Azure VMware Solution, your existing Azure environment, and your on-premises environment all exchange routes (typically). That being the case, the /22 CIDR network address block you define in this step shouldn't overlap anything you already have on-premises or Azure.
@@ -95,7 +95,7 @@ To access your Azure VMware Solution private cloud, the ExpressRoute circuit, wh
The ExpressRoute circuit from Azure VMware Solution connects to an ExpressRoute gateway in the Azure Virtual Network that you define in this step. >[!IMPORTANT]
->You can use an existing ExpressRoute Gateway to connect to Azure VMware Solution as long as it does not exceed the limit of four ExpressRoute circuits per virtual network. However, to access Azure VMware Solution from on-premises through ExpressRoute, you must have ExpressRoute Global Reach since the ExpressRoute gateway does not provide transitive routing between its connected circuits.
+>You can use an existing ExpressRoute Gateway to connect to Azure VMware Solution as long as it does not exceed the limit of four ExpressRoute circuits per virtual network. However, to access Azure VMware Solution from on-premises through ExpressRoute, you must have ExpressRoute Global Reach since the ExpressRoute Gateway does not provide transitive routing between its connected circuits.
If you want to connect the ExpressRoute circuit from Azure VMware Solution to an existing ExpressRoute gateway, you can do it after deployment.
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/tutorial-expressroute-global-reach-private-cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
@@ -2,7 +2,7 @@
title: Tutorial - Peer on-premises environments to a private cloud description: Learn how to create ExpressRoute Global Reach peering to a private cloud in an Azure VMware Solution. ms.topic: tutorial
-ms.date: 09/21/2020
+ms.date: 01/05/2021
---

# Tutorial: Peer on-premises environments to a private cloud
@@ -28,6 +28,7 @@ Before you enable connectivity between two ExpressRoute circuits using ExpressRo
- Established connectivity to and from an Azure VMware Solution private cloud with its ExpressRoute circuit peered with an ExpressRoute gateway in an Azure virtual network (VNet) – which is _circuit 2_ from peering procedures.
- A separate, functioning ExpressRoute circuit used to connect on-premises environments to Azure – which is _circuit 1_ from the peering procedures' perspective.
- A /29 non-overlapping [network address block](../expressroute/expressroute-routing.md#ip-addresses-used-for-peerings) for the ExpressRoute Global Reach peering.
+- Make sure that all routers, including the ExpressRoute provider's service, support 4-byte Autonomous System Numbers (ASN). Azure VMware Solution uses 4-byte public ASNs to advertise routes.
> [!TIP]
> In the context of these prerequisites, your on-premises ExpressRoute circuit is _circuit 1_, and your private cloud ExpressRoute circuit is in a different subscription and labeled _circuit 2_.
backup https://docs.microsoft.com/en-us/azure/backup/backup-center-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-center-faq.md deleted file mode 100644
@@ -1,47 +0,0 @@
-title: Backup Center - FAQ
-description: This article answers frequently asked questions about Backup Center
-ms.topic: conceptual
-ms.date: 09/08/2020
-
-# Backup Center - Frequently asked questions
-
-## Management
-
-### Can Backup Center be used across tenants?
-
-Yes, if you're using [Azure Lighthouse](../lighthouse/overview.md) and have delegated access to subscriptions across different tenants, you can use Backup Center as a single pane of glass to manage backups across tenants.
-
-### Can Backup Center be used to manage both Recovery Services vaults and Backup vaults?
-
-Yes, Backup Center can manage both [Recovery Services vaults](./backup-azure-recovery-services-vault-overview.md) and [Backup vaults](backup-vault-overview.md).
-
-### Is there a delay before data surfaces in Backup Center?
-
-Backup Center is aimed at providing real-time information. There may be a few seconds lag between the time an entity shows up in an individual vault screen, and the time the same entity shows up in Backup Center.
-
-## Configuration
-
-### Do I need to configure anything to see data in Backup Center?
-
-No. Backup Center comes ready out of the box. However, to view [Backup Reports](./configure-reports.md) under Backup Center, you need to configure reporting for your vaults.
-
-### Do I need to have any special permissions to use Backup Center?
-
-Backup Center as such doesn't need any new permissions. As long as you have the right level of Azure RBAC access for the resources you're managing, you can use Backup Center for these resources. For example, to view information about your backups, you'll need **Reader** access to your vaults. To configure backup and perform other backup-related actions, you'll need **Backup Contributor** or **Backup Operator** roles. Learn more about [Azure roles for Azure Backup](./backup-rbac-rs-vault.md).
-
-If you're using [Backup Reports](./configure-reports.md) under Backup Center, you will need access to the Log Analytics workspace(s) that your vault(s) are sending data to, to view reports for these vaults.
-
-## Pricing
-
-### Do I need to pay anything extra to use Backup Explorer?
-
-Currently, there are no additional costs (apart from your backup costs) to use Backup Center. However, if you're using [Backup Reports](./configure-reports.md) under Backup Center, there's a [cost involved](https://azure.microsoft.com/pricing/details/monitor/) in using Azure Monitor Logs for Backup Reports.
-
-## Next steps
-
-Read the other FAQs:
-
-* [Common questions about Recovery Services vaults](./backup-azure-backup-faq.md)
-* [Common questions about Azure VM backups](./backup-azure-vm-backup-faq.md)
\ No newline at end of file
bastion https://docs.microsoft.com/en-us/azure/bastion/troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/troubleshoot.md
@@ -78,7 +78,7 @@ The key's randomart image is:
## <a name="blackscreen"></a>Black screen in the Azure portal
-**Q:** When I try to connect using Azure Bastion, I get a black screen in the Azure portal.
+**Q:** When I try to connect using Azure Bastion, I can't connect to the target VM and I get a black screen in the Azure portal.
**A:** This happens when there is either a network connectivity issue between your web browser and Azure Bastion (your client Internet firewall may be blocking WebSockets traffic or similar), or between the Azure Bastion and your target VM. Most cases include an NSG applied either to AzureBastionSubnet, or on your target VM subnet that is blocking the RDP/SSH traffic in your virtual network. Allow WebSockets traffic on your client internet firewall, and check the NSGs on your target VM subnet.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Anomaly-Detector/overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: anomaly-detector
ms.topic: overview
-ms.date: 11/23/2020
+ms.date: 01/05/2021
ms.author: mbullwin
keywords: anomaly detection, machine learning, algorithms
ms.custom: cog-serv-seo-aug-2020
@@ -78,9 +78,18 @@ After signing up:
You can read the paper [Time-Series Anomaly Detection Service at Microsoft](https://arxiv.org/abs/1906.03821) (accepted by KDD 2019) to learn more about the SR-CNN algorithms developed by Microsoft.

-> [!VIDEO https://www.youtube.com/embed/ERTaAnwCarM]
+## Service availability and redundancy
+
+### Is the Anomaly Detector service zone resilient?
+
+Yes. The Anomaly Detector service is zone-resilient by default.
+
+### How do I configure the Anomaly Detector service to be zone-resilient?
+
+No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Anomaly Detector resources is available by default and managed by the service itself.
+
## Deploy on premises using Docker containers

[Use Anomaly Detector containers](anomaly-detector-container-howto.md) to deploy API features on-premises. Docker containers enable you to bring the service closer to your data for compliance, security or other operational reasons.
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/get-started-build-detector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
@@ -96,6 +96,7 @@ After training has completed, the model's performance is calculated and displaye
- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%.
- **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
+- **Mean average precision** is the average value of the average precision (AP). AP is the area under the precision/recall curve (precision plotted against recall for each prediction made).
![The training results show the overall precision and recall, and mean average precision.](./media/get-started-build-detector/trained-performance.png)
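The definitions above can be sketched numerically. This is an illustration of the arithmetic only — the helper names are hypothetical, and the AP approximation here is a simple rectangle rule over precision/recall points, not necessarily the exact interpolation Custom Vision uses:

```python
def precision(true_positives, predicted_positives):
    """Fraction of identified classifications that were correct."""
    return true_positives / predicted_positives

def recall(true_positives, actual_positives):
    """Fraction of actual classifications that were correctly identified."""
    return true_positives / actual_positives

def average_precision(pr_points):
    """Area under the precision/recall curve, approximated with rectangles
    over (recall, precision) points sorted by increasing recall."""
    ap, prev_recall = 0.0, 0.0
    for r, p in sorted(pr_points):
        ap += (r - prev_recall) * p
        prev_recall = r
    return ap

# The two worked examples from the list above:
print(precision(99, 100))  # 0.99 -> 99% precision
print(recall(80, 100))     # 0.8  -> 80% recall
```

Mean average precision is then the mean of `average_precision` taken across the project's tags.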
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/limits-and-quotas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/limits-and-quotas.md
@@ -41,6 +41,3 @@ The number of training images per project and tags per project are expected to i
|Max regions per object detection training image|300|300|
|Max tags per classification image|100|100|
-> [!NOTE]
-> Images smaller than than 256 pixels will be accepted but upscaled.
-> Image aspect ratio should not be larger than 25
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-batch-test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-how-to-batch-test.md
@@ -172,10 +172,10 @@ Remember to add your LUIS key to `Apim-Subscription-Id` in the header, and set `
Start a batch test using either an app version ID or a publishing slot. Send a **POST** request to one of the following endpoint formats. Include your batch file in the body of the request.

Publishing slot
-* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations`
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-NAME>/evaluations`
App version ID
-* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations`
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations`
These endpoints will return an operation ID that you will use to check the status, and get results.
@@ -185,20 +185,20 @@ These endpoints will return an operation ID that you will use to check the statu
Use the operation ID from the batch test you started to get its status from the following endpoint formats:

Publishing slot
-* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status`
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/status`
App version ID
-* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/status`
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/status`
### Get the results from a batch test

Use the operation ID from the batch test you started to get its results from the following endpoint formats:

Publishing slot
-* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result`
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0-preview/apps/<YOUR-APP-ID>/slots/<YOUR-SLOT-ID>/evaluations/<YOUR-OPERATION-ID>/result`
App version ID
-* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/result`
+* `<YOUR-PREDICTION-ENDPOINT>/luis/prediction/v3.0-preview/apps/<YOUR-APP-ID>/versions/<YOUR-APP-VERSION-ID>/evaluations/<YOUR-OPERATION-ID>/result`
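As an illustrative sketch, the three v3.0-preview endpoint formats above can be assembled like this. The helper functions and placeholder values are assumptions, not part of any LUIS SDK; the real requests also need your LUIS key in the `Apim-Subscription-Id` header, as noted earlier:

```python
# Base path shared by the three v3.0-preview endpoint formats.
BASE = "{endpoint}/luis/prediction/v3.0-preview/apps/{app_id}"

def start_url(endpoint, app_id, slot=None, version=None):
    """URL to POST the batch file to, by publishing slot or by app version ID."""
    root = BASE.format(endpoint=endpoint.rstrip("/"), app_id=app_id)
    if slot is not None:
        return f"{root}/slots/{slot}/evaluations"
    return f"{root}/versions/{version}/evaluations"

def status_url(endpoint, app_id, operation_id, slot=None, version=None):
    """URL to poll for the status of a started batch test."""
    return f"{start_url(endpoint, app_id, slot, version)}/{operation_id}/status"

def result_url(endpoint, app_id, operation_id, slot=None, version=None):
    """URL to fetch the results of a finished batch test."""
    return f"{start_url(endpoint, app_id, slot, version)}/{operation_id}/result"

print(start_url("https://westus.api.cognitive.microsoft.com", "my-app-id",
                slot="production"))
```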
### Batch file of utterances
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/whats-new https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/whats-new.md
@@ -4,7 +4,7 @@ description: This article is regularly updated with news about the Azure Cogniti
ms.service: cognitive-services
ms.subservice: language-understanding
ms.topic: overview
-ms.date: 06/15/2020
+ms.date: 01/05/2021
---

# What's new in Language Understanding
@@ -15,7 +15,8 @@ Learn what's new in the service. These items include release notes, videos, blog
### December 2020
-* All LUIS users are required to [migrate to a LUIS authorint resource](luis-migration-authoring.md)
+* All LUIS users are required to [migrate to a LUIS authoring resource](luis-migration-authoring.md)
+* New [evaluation endpoints](luis-how-to-batch-test.md#batch-testing-using-the-rest-api) that allow you to submit batch tests using the REST API and get accuracy results for your intents and entities. Available starting with the v3.0-preview LUIS endpoint.
### June 2020
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/whats-new-docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/whats-new-docs.md
@@ -1,81 +1,69 @@
--- title: "Cognitive Services: What's new in docs"
-description: "What's new in the Cognitive Services docs for November 1, 2020 through November 30, 2020. "
+description: "What's new in the Cognitive Services docs for December 1, 2020 - December 31, 2020."
author: erhopf
manager: nitinme
ms.topic: conceptual
ms.author: erhopf
ms.service: cognitive-services
-ms.date: 12/07/2020
+ms.date: 01/05/2021
---
-# Cognitive Services docs: What's new for November 1, 2020 - November 30, 2020
+# Cognitive Services docs: What's new for December 1, 2020 - December 31, 2020
-Welcome to what's new in the Cognitive Services docs from November 1, 2020 through November 30, 2020. This article lists some of the major changes to docs during this period.
+Welcome to what's new in the Cognitive Services docs from December 1, 2020 through December 31, 2020. This article lists some of the major changes to docs during this period.
## Cognitive Services
-### Updated articles
+### New articles
-- [Quickstart: Create a Cognitive Services resource using the Azure Command-Line Interface(CLI)](cognitive-services-apis-create-account-cli.md)
-- [Cognitive Services development options](cognitive-services-development-options.md)
-- [Azure Cognitive Services support and help options](cognitive-services-support-options.md)
-- [Enable diagnostic logging for Azure Cognitive Services](diagnostic-logging.md)
-- [Natural language support for Azure Cognitive Services](language-support.md)
-- [Azure security baseline for Cognitive Services](security-baseline.md)
-
-## Containers
+- [Plan and manage costs for Azure Cognitive Services](plan-manage-costs.md)
### Updated articles

-- [Azure Cognitive Services container image tags and release notes](/azure/cognitive-services/containers/container-image-tags)
+- [Configure Azure Cognitive Services virtual networks](cognitive-services-virtual-networks.md)
-## Form Recognizer
-
-### New articles
-- [Form Recognizer prebuilt invoice model](/azure/cognitive-services/form-recognizer/concept-invoices)
-- [Form Recognizer Layout service](/azure/cognitive-services/form-recognizer/concept-layout)
-- [Quickstart: Extract invoice data using the Form Recognizer REST API with Python](/azure/cognitive-services/form-recognizer/quickstarts/python-invoices)
+## Anomaly Detector
### Updated articles

-- [Receipt concepts](/azure/cognitive-services/form-recognizer/concept-receipts)
-- [What is Form Recognizer?](/azure/cognitive-services/form-recognizer/overview)
-- [Train a Form Recognizer model with labels using the sample labeling tool](/azure/cognitive-services/form-recognizer/quickstarts/label-tool)
-- [Quickstart: Extract business card data using the Form Recognizer REST API with Python](/azure/cognitive-services/form-recognizer/quickstarts/python-business-cards)
-- [What's new in Form Recognizer?](/azure/cognitive-services/form-recognizer/whats-new)
+- [Anomaly Detector REST API quickstart](https://docs.microsoft.com/azure/cognitive-services/anomaly-detector/quickstarts/client-libraries?tabs=windows&pivots=rest-api)
-## Metrics Advisor
+## Bing Visual Search
-### New articles
+### Updated articles
+
+- [Use an insights token to get insights for an image](/azure/cognitive-services/bing-visual-search/use-insights-token.md)
-- [Metrics Advisor: what's new in the docs](/azure/cognitive-services/metrics-advisor/whats-new)
+## Containers
### Updated articles

-- [Provide anomaly feedback](/azure/cognitive-services/metrics-advisor/how-tos/anomaly-feedback)
-- [Metrics Advisor frequently asked questions](/azure/cognitive-services/metrics-advisor/faq)
-- [Quickstart: Use the client libraries or REST APIs to customize your solution](/azure/cognitive-services/metrics-advisor/quickstarts/rest-api-and-client-library)
+- [Deploy and run container on Azure Container Instance](/azure/cognitive-services/containers/azure-container-instance-recipe.md)
-## QnA Maker
+## Form Recognizer
-### New articles
+### Updated articles
-* [QnA Maker managed public preview announcement](https://techcommunity.microsoft.com/t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575)
-* [Create a new QnA Maker managed service](https://docs.microsoft.com/azure/cognitive-services/qnamaker/how-to/set-up-qnamaker-service-azure?tabs=v2)
-* [Migrate your existing knowledge base to QnA Maker managed](https://docs.microsoft.com/azure/cognitive-services/qnamaker/tutorials/migrate-knowledge-base)
+- [Form Recognizer landing page](/azure/cognitive-services/form-recognizer/index.yml)
+- [Quickstart: Use the Form Recognizer client library](/azure/cognitive-services/form-recognizer/quickstarts/client-library.md)
## Text Analytics

### Updated articles

-- [Data and rate limits for the Text Analytics API](/azure/cognitive-services/text-analytics/concepts/data-limits)
-- [How to: Use Text Analytics for health (preview)](/azure/cognitive-services/text-analytics/how-tos/text-analytics-for-health)
-- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api)
-- [How to use Named Entity Recognition in Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking)
-- [How to: Sentiment analysis and Opinion Mining](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-sentiment-analysis)
-- [What's new in the Text Analytics API?](/azure/cognitive-services/text-analytics/whats-new)
-- [Example: Detect language with Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection)
+- [Text Analytics API v3 language support](/azure/cognitive-services/text-analytics/language-support.md)
+- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md)
+- [How to use Named Entity Recognition in Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking.md)
+- [Example: How to extract key phrases using Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-keyword-extraction.md)
+- [Text Analytics API Documentation - Tutorials, API Reference - Azure Cognitive Services | Microsoft Docs](/azure/cognitive-services/text-analytics/index.yml)
+- [Quickstart: Use the Text Analytics client library and REST API](/azure/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md)
+
+## Community contributors
+
+The following people contributed to the Cognitive Services docs during this period. Thank you!
+
+- [hyoshioka0128](https://github.com/hyoshioka0128) - Hiroshi Yoshioka (1)
+- [pymia](https://github.com/pymia) - Mia // Huai-Wen Chang (1)
[!INCLUDE [Service specific updates](./includes/service-specific-updates.md)]
container-instances https://docs.microsoft.com/en-us/azure/container-instances/container-instances-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-monitor.md
@@ -2,7 +2,7 @@
title: Monitor container instances
description: How to monitor the consumption of compute resources like CPU and memory by your containers in Azure Container Instances.
ms.topic: article
-ms.date: 04/24/2019
+ms.date: 12/17/2020
---

# Monitor container resources in Azure Container Instances
@@ -21,11 +21,11 @@ At this time, Azure Monitor metrics are only available for Linux containers.
Azure Monitor provides the following [metrics for Azure Container Instances][supported-metrics]. These metrics are available for a container group and individual containers. By default, the metrics are aggregated as averages.
-* **CPU Usage** - measured in **millicores**. One millicore is 1/1000th of a CPU core, so 500 millicores represents usage of 0.5 CPU core.
-
-* **Memory Usage** - in bytes.
-
-* **Network Bytes Received Per Second** and **Network Bytes Transmitted Per Second**.
+- **CPU Usage** measured in **millicores**.
+ - One millicore is 1/1000th of a CPU core, so 500 millicores represents usage of 0.5 CPU core.
+- **Memory Usage** in bytes
+- **Network bytes received** per second
+- **Network bytes transmitted** per second
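As a quick illustration of the millicore arithmetic above (the helper name is hypothetical, not part of Azure Monitor):

```python
def millicores_to_cores(millicores: float) -> float:
    """One millicore is 1/1000th of a CPU core."""
    return millicores / 1000

# 500 millicores is half a CPU core.
print(millicores_to_cores(500))  # 0.5
```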
## Get metrics - Azure portal
@@ -33,7 +33,7 @@ When a container group is created, Azure Monitor data is available in the Azure
![dual-chart][dual-chart]
-In a container group that contains multiple containers, use a [dimension][monitor-dimension] to present metrics by container. To create a chart with individual container metrics, perform the following steps:
+In a container group that contains multiple containers, use a [dimension][monitor-dimension] to display metrics by container. To create a chart with individual container metrics, perform the following steps:
1. In the **Overview** page, select one of the metric charts, such as **CPU**. 1. Select the **Apply splitting** button, and select **Container Name**.
@@ -58,18 +58,11 @@ az monitor metrics list --resource $CONTAINER_GROUP --metric CPUUsage --output t
```output
Timestamp            Name       Average
-------------------  ---------  ---------
-2019-04-23 22:59:00 CPU Usage
-2019-04-23 23:00:00 CPU Usage
-2019-04-23 23:01:00 CPU Usage 0.0
-2019-04-23 23:02:00 CPU Usage 0.0
-2019-04-23 23:03:00 CPU Usage 0.5
-2019-04-23 23:04:00 CPU Usage 0.5
-2019-04-23 23:05:00 CPU Usage 0.5
-2019-04-23 23:06:00 CPU Usage 1.0
-2019-04-23 23:07:00 CPU Usage 0.5
-2019-04-23 23:08:00 CPU Usage 0.5
-2019-04-23 23:09:00 CPU Usage 1.0
-2019-04-23 23:10:00 CPU Usage 0.5
+2020-12-17 23:34:00 CPU Usage
+. . .
+2020-12-18 00:25:00 CPU Usage
+2020-12-18 00:26:00 CPU Usage 0.4
+2020-12-18 00:27:00 CPU Usage 0.0
```

Change the value of the `--metric` parameter in the command to get other [supported metrics][supported-metrics]. For example, use the following command to get **memory** usage metrics.
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/configure-synapse-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
@@ -222,7 +222,7 @@ The [Azure Resource Manager template](./manage-with-templates.md#azure-cosmos-ac
## <a id="cosmosdb-synapse-link-samples"></a> Getting started with Azure Synapse Link - Samples
-You can find samples to get started with Azure Synapse Link on [GitHub](https://aka.ms/cosmosdb-synapselink-samples). These showcase end-to-end solutions with IoT and retail scenarios. You can also find the samples corresponding to Azure Cosmos DB API for MongoDB in the same repo under the [MongoDB](https://github.com/Azure-Samples/Synapse/tree/master/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples) folder.
+You can find samples to get started with Azure Synapse Link on [GitHub](https://aka.ms/cosmosdb-synapselink-samples). These showcase end-to-end solutions with IoT and retail scenarios. You can also find the samples corresponding to Azure Cosmos DB API for MongoDB in the same repo under the [MongoDB](https://github.com/Azure-Samples/Synapse/tree/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples) folder.
## Next steps
cosmos-db https://docs.microsoft.com/en-us/azure/cosmos-db/synapse-link-power-bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-power-bi.md
@@ -13,7 +13,7 @@ ms.author: acomet
In this article, you learn how to build a serverless SQL pool database and views over Synapse Link for Azure Cosmos DB. You will query the Azure Cosmos DB containers and then build a model with Power BI over those views to reflect that query.
-In this scenario, you will use dummy data about Surface product sales in a partner retail store. You will analyze the revenue per store based on the proximity to large households and the impact of advertising for a specific week. In this article, you create two views named **RetailSales** and **StoreDemographics** and a query between them. You can get the sample product data from this [GitHub](https://github.com/Azure-Samples/Synapse/tree/master/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/RetailData) repo.
+In this scenario, you will use dummy data about Surface product sales in a partner retail store. You will analyze the revenue per store based on the proximity to large households and the impact of advertising for a specific week. In this article, you create two views named **RetailSales** and **StoreDemographics** and a query between them. You can get the sample product data from this [GitHub](https://github.com/Azure-Samples/Synapse/tree/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/RetailData) repo.
> [!IMPORTANT]
> Synapse serverless SQL pool support for Azure Synapse Link for Azure Cosmos DB is currently in preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
@@ -28,7 +28,7 @@ Make sure to create the following resources before you start:
* Create a database within the Azure Cosmos account and two containers that have [analytical store enabled.](configure-synapse-link.md#create-analytical-ttl)
-* Load products data into the Azure Cosmos containers as described in this [batch data ingestion](https://github.com/Azure-Samples/Synapse/blob/master/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/spark-notebooks/pyspark/1CosmoDBSynapseSparkBatchIngestion.ipynb) notebook.
+* Load products data into the Azure Cosmos containers as described in this [batch data ingestion](https://github.com/Azure-Samples/Synapse/blob/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/spark-notebooks/pyspark/1CosmoDBSynapseSparkBatchIngestion.ipynb) notebook.
* [Create a Synapse workspace](../synapse-analytics/quickstart-create-workspace.md) named **SynapseLinkBI**.
data-factory https://docs.microsoft.com/en-us/azure/data-factory/security-and-access-control-troubleshoot-guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/security-and-access-control-troubleshoot-guide.md
@@ -5,7 +5,7 @@ services: data-factory
author: lrtoyou1223
ms.service: data-factory
ms.topic: troubleshooting
-ms.date: 11/19/2020
+ms.date: 01/05/2021
ms.author: lle
ms.reviewer: craigg
---
@@ -147,6 +147,16 @@ Try to enable public network access on the user interface, as shown in the follo
![Screenshot of the "Enabled" control for "Allow public network access" on the Networking pane.](media/self-hosted-integration-runtime-troubleshoot-guide/enable-public-network-access.png)
+### Pipeline runtime varies when basing on different IR
+
+#### Symptoms
+
+The same pipeline activities show drastically different run times depending only on which linked service is selected in the dataset. When the dataset is based on the Managed Virtual Network Integration Runtime, the run takes more than 2 minutes on average to complete, but it takes approximately 20 seconds when based on the Default Integration Runtime.
+
+#### Cause
+
+Checking the details of the pipeline runs, you can see that the slow pipeline runs on a Managed VNet (Virtual Network) IR while the normal one runs on an Azure IR. By design, a Managed VNet IR has a longer queue time than an Azure IR because one compute node isn't reserved per data factory, so each copy activity has a warm-up of around 2 minutes before it starts, and this occurs primarily on the VNet join rather than on the Azure IR.
+
## Next steps

For more help with troubleshooting, try the following resources:
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-deploy-add-shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-deploy-add-shares.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: tutorial
-ms.date: 03/21/2019
+ms.date: 01/04/2021
ms.author: alkohli
Customer intent: As an IT admin, I need to understand how to add and connect to shares on Azure Stack Edge Pro so I can use it to transfer data to Azure.
---
@@ -37,7 +37,7 @@ Before you add shares to Azure Stack Edge Pro, make sure that:
To create a share, do the following procedure:
-1. In the [Azure portal](https://portal.azure.com/), select your Azure Stack Edge resource and then go to the **Overview**. Your device should be online.
+1. In the [Azure portal](https://portal.azure.com/), select your Azure Stack Edge resource and then go to the **Overview**. Your device should be online. Select **Cloud storage gateway**.
![Device online](./media/azure-stack-edge-deploy-add-shares/device-online-1.png)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md
@@ -335,7 +335,7 @@ For more information, see [Nvidia GPU driver extension for Windows](../virtual-m
### GPU extension for Linux
-To deploy Nvidia GPU drivers for an existing VM, edit the `addGPUExtLinuxVM.parameters.json` parameters file and then deploy the template `addGPUextensiontoVM.json`.
+To deploy Nvidia GPU drivers for an existing VM, edit the parameters file and then deploy the template `addGPUextensiontoVM.json`. There are specific parameters files for Ubuntu and Red Hat Enterprise Linux (RHEL) as discussed in the following sections.
#### Edit parameters file
@@ -368,8 +368,7 @@ If using Ubuntu, the `addGPUExtLinuxVM.parameters.json` file takes the following
  }
}
```
-If using Red Hat Enterprise Linux (RHEL), file takes the following parameters:
-
+If using Red Hat Enterprise Linux (RHEL), the `addGPUExtensionRHELVM.parameters.json` file takes the following parameters:
```powershell {
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-deploy-add-shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-add-shares.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: tutorial
-ms.date: 12/22/2020
+ms.date: 01/04/2021
ms.author: alkohli
Customer intent: As an IT admin, I need to understand how to add and connect to shares on Azure Stack Edge Pro so I can use it to transfer data to Azure.
---
@@ -38,7 +38,7 @@ Before you add shares to Azure Stack Edge Pro, make sure that:
To create a share, do the following procedure:
-1. In the [Azure portal](https://portal.azure.com/), select your Azure Stack Edge resource and then go to the **Overview**. Your device should be online.
+1. In the [Azure portal](https://portal.azure.com/), select your Azure Stack Edge resource and then go to the **Overview**. Your device should be online. Select **Cloud storage gateway**.
![Device online](./media/azure-stack-edge-j-series-deploy-add-shares/device-online-1.png)
@@ -46,7 +46,7 @@ To create a share, do the following procedure:
![Add a share](./media/azure-stack-edge-j-series-deploy-add-shares/select-add-share-1.png)
-3. In the **Add share** pane, do the following procedure:
+3. In the **Add share** pane, follow these steps:
   a. In the **Name** box, provide a unique name for your share. The share name can have only letters, numerals, and hyphens. It must have between 3 and 63 characters and begin with a letter or a numeral. Hyphens must be preceded and followed by a letter or a numeral.
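The naming rules above can be checked with a regular expression. A sketch only — the helper is hypothetical (not part of Azure Stack Edge), and it assumes "letters" means either case:

```python
import re

# 3-63 chars, only letters/numerals/hyphens, starting with a letter or
# numeral, and every hyphen surrounded by letters or numerals.
SHARE_NAME = re.compile(r"^(?=.{3,63}$)[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*$")

def is_valid_share_name(name: str) -> bool:
    return SHARE_NAME.fullmatch(name) is not None

print(is_valid_share_name("my-share-01"))  # True
print(is_valid_share_name("-badshare"))    # False (starts with a hyphen)
```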
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-manage-bandwidth-schedules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-bandwidth-schedules.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: how-to
-ms.date: 08/28/2020
+ms.date: 01/05/2021
ms.author: alkohli --- # Use the Azure portal to manage bandwidth schedules on your Azure Stack Edge Pro GPU
@@ -35,12 +35,12 @@ Do the following steps in the Azure portal to add a schedule.
![Select Bandwidth](media/azure-stack-edge-j-series-manage-bandwidth-schedules/add-schedule-1.png)
-3. In the **Add schedule**:
+3. In the **Add schedule** pane:
1. Provide the **Start day**, **End day**, **Start time**, and **End time** of the schedule. 2. Check the **All day** option if this schedule should run all day. 3. **Bandwidth rate** is the bandwidth in Megabits per second (Mbps) used by your device in operations involving the cloud (both uploads and downloads). Supply a number between 20 and 2,147,483,647 for this field.
- 4. Check **Unlimited** bandwidth if you do not want to throttle the date upload and download.
+ 4. Select **Unlimited bandwidth** if you do not want to throttle the data upload and download.
5. Select **Add**. ![Add schedule](media/azure-stack-edge-j-series-manage-bandwidth-schedules/add-schedule-2.png)
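The bandwidth-rate bounds stated in step 3 can be sketched as a simple check. This is illustrative only, assuming `None`-free integer input and a separate flag for the **Unlimited bandwidth** option; it is not an Azure API.

```python
# Illustrative sketch of the validation implied by the Add schedule pane:
# the bandwidth rate (in Mbps) must fall between 20 and 2,147,483,647,
# unless "Unlimited bandwidth" is selected.
def is_valid_bandwidth_rate(rate_mbps, unlimited=False):
    if unlimited:  # "Unlimited bandwidth" checked: the rate field is ignored
        return True
    return isinstance(rate_mbps, int) and 20 <= rate_mbps <= 2_147_483_647
```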
@@ -53,9 +53,10 @@ Do the following steps in the Azure portal to add a schedule.
Do the following steps to edit a bandwidth schedule.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Bandwidth**.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Bandwidth**.
2. From the list of bandwidth schedules, select a schedule that you want to modify.
- ![Select bandwidth schedule](media/azure-stack-edge-j-series-manage-bandwidth-schedules/modify-schedule-1.png)
+
+ ![Select bandwidth schedule](media/azure-stack-edge-j-series-manage-bandwidth-schedules/modify-schedule-1.png)
3. Make the desired changes and save the changes.
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-manage-shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-shares.md
@@ -7,14 +7,14 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: how-to
-ms.date: 08/28/2020
+ms.date: 01/04/2021
ms.author: alkohli --- # Use Azure portal to manage shares on your Azure Stack Edge Pro <!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
-This article describes how to manage shares on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, delete, refresh shares, or sync storage key for storage account associated with the shares.
+This article describes how to manage shares on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, delete, or refresh shares, or to sync the storage key for the storage account associated with the shares. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
## About shares
@@ -28,7 +28,7 @@ To transfer data to Azure, you need to create shares on your Azure Stack Edge Pr
Do the following steps in the Azure portal to create a share.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**. Select **+ Add share** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
![Select add share](media/azure-stack-edge-j-series-manage-shares/add-share-1.png)
@@ -58,7 +58,7 @@ Do the following steps in the Azure portal to create a share.
## Add a local share
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**. Select **+ Add share** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
![Select add share 2](media/azure-stack-edge-j-series-manage-shares/add-local-share-1.png)
@@ -94,7 +94,7 @@ Do the following steps in the Azure portal to create a share.
If you created a share before you configured compute on your Azure Stack Edge Pro device, you will need to mount the share. Take the following steps to mount a share.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
![Select share](media/azure-stack-edge-j-series-manage-shares/mount-share-1.png)
@@ -118,11 +118,11 @@ If you created a share before you configured compute on your Azure Stack Edge Pr
Do the following steps in the Azure portal to unmount a share.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share that you want to unmount. Make sure that the share you unmount is not used by any modules; if it is, the corresponding module will run into issues.
![Select share 2](media/azure-stack-edge-j-series-manage-shares/unmount-share-1.png)
-2. From the list of the shares, select the share that you want to unmount. You want to make sure that the share you unmount is not used by any modules. If the share is used by a module, then you will see issues with the corresponding module. Select **Unmount**.
+2. Select **Unmount**.
![Select unmount](media/azure-stack-edge-j-series-manage-shares/unmount-share-2.png)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-manage-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-users.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: how-to
-ms.date: 08/28/2020
+ms.date: 01/05/2021
ms.author: alkohli --- # Use the Azure portal to manage users on your Azure Stack Edge Pro
@@ -25,19 +25,19 @@ In this article, you learn how to:
## About users
-Users can be read-only or full privilege. As the names indicate, the read-only users can only view the share data. The full privilege users can read share data, write to these shares, and modify or delete the share data.
+Users can be read-only or full privilege. Read-only users can only view the share data. Full privilege users can read share data, write to these shares, and modify or delete the share data.
- **Full privilege user** - A local user with full access. - **Read-only user** - A local user with read-only access. These users are associated with shares that allow read-only operations.
-The user permissions are first defined when the user is created during share creation. After the permissions associated with a user are defined, these can be modified by using File Explorer.
+The user permissions are first defined when the user is created during share creation. They can be modified by using File Explorer.
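The two local-user access levels described above can be modeled as a small set of operation flags. The class and operation names here are illustrative stand-ins, not an Azure API.

```python
from enum import Flag, auto

# Sketch of the two local-user access levels described above.
class Op(Flag):
    READ = auto()           # view share data
    WRITE = auto()          # write to shares
    MODIFY_DELETE = auto()  # modify or delete share data

READ_ONLY_USER = Op.READ
FULL_PRIVILEGE_USER = Op.READ | Op.WRITE | Op.MODIFY_DELETE

def allowed(user: Op, op: Op) -> bool:
    """True if the user's access level includes the requested operation."""
    return op in user
```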
## Add a user Do the following steps in the Azure portal to add a user.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Overview > Users**. Select **+ Add user** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Users**. Select **+ Add user** on the command bar.
![Select add user](media/azure-stack-edge-j-series-manage-users/add-user-1.png)
@@ -54,7 +54,7 @@ Do the following steps in the Azure portal to add a user.
## Modify user You can change the password associated with a user once the user is created. Select from the list of users. Enter and confirm the new password. Save the changes.
-
+ ![Modify user](media/azure-stack-edge-j-series-manage-users/modify-user-1.png)
@@ -63,7 +63,7 @@ You can change the password associated with a user once the user is created. Sel
Do the following steps in the Azure portal to delete a user.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Overview > Users**.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Users**.
![Select user to delete](media/azure-stack-edge-j-series-manage-users/delete-user-1.png)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-manage-shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-manage-shares.md
@@ -1,22 +1,22 @@
---
-title: Azure Stack Edge Pro share management | Microsoft Docs
-description: Describes how to use the Azure portal to manage shares on your Azure Stack Edge Pro.
+title: Azure Stack Edge Pro - FPGA share management | Microsoft Docs
+description: Describes how to use the Azure portal to manage shares on your Azure Stack Edge Pro - FPGA.
services: databox author: alkohli ms.service: databox ms.subservice: edge ms.topic: how-to
-ms.date: 03/25/2019
+ms.date: 01/04/2021
ms.author: alkohli ---
-# Use the Azure portal to manage shares on Azure Stack Edge Pro
+# Use the Azure portal to manage shares on Azure Stack Edge Pro FPGA
-This article describes how to manage shares on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, delete, refresh shares, or sync storage key for storage account associated with the shares.
+This article describes how to manage shares on your Azure Stack Edge Pro FPGA device. You can manage the Azure Stack Edge Pro FPGA device via the Azure portal or via the local web UI. Use the Azure portal to add, delete, or refresh shares, or to sync the storage key for the storage account associated with the shares.
## About shares
-To transfer data to Azure, you need to create shares on your Azure Stack Edge Pro. The shares that you add on the Azure Stack Edge Pro device can be local shares or shares that push data to cloud.
+To transfer data to Azure, you need to create shares on your Azure Stack Edge Pro FPGA. The shares that you add on the Azure Stack Edge Pro FPGA device can be local shares or shares that push data to the cloud.
- **Local shares**: Use these shares when you want the data to be processed locally on the device. - **Shares**: Use these shares when you want the device data to be automatically pushed to your storage account in the cloud. All the cloud functions such as **Refresh** and **Sync storage keys** apply to the shares.
@@ -34,7 +34,7 @@ In this article, you learn how to:
Do the following steps in the Azure portal to create a share.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**. Select **+ Add share** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway**. Go to **Shares** and then select **+ Add share** on the command bar.
![Select add share](media/azure-stack-edge-manage-shares/add-share-1.png)
@@ -67,7 +67,7 @@ Do the following steps in the Azure portal to create a share.
## Add a local share
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**. Select **+ Add share** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select **+ Add share** on the command bar.
![Select add share 2](media/azure-stack-edge-manage-shares/add-local-share-1.png)
@@ -91,15 +91,13 @@ Do the following steps in the Azure portal to create a share.
![View updates Shares blade](media/azure-stack-edge-manage-shares/add-local-share-3.png)
- Select the share to view the local mountpoint for the Edge compute modules for this share.
- ![View local share details](media/azure-stack-edge-manage-shares/add-local-share-4.png)
## Mount a share If you created a share before you configured compute on your Azure Stack Edge Pro device, you will need to mount the share. Take the following steps to mount a share.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of the shares, select the share you want to mount. The **Used for compute** column will show the status as **Disabled** for the selected share.
![Select share 3](media/azure-stack-edge-manage-shares/select-share-mount.png)
@@ -117,13 +115,13 @@ If you created a share before you configured compute on your Azure Stack Edge Pr
5. Select the share again to view the local mountpoint for the share. Edge compute module uses this local mountpoint for the share.
- ![Local mountpoint for the share](media/azure-stack-edge-manage-shares/share-mountpoint.png)
+ ![Local mountpoint for the share](media/azure-stack-edge-manage-shares/share-mountpoint.png)
## Unmount a share Do the following steps in the Azure portal to unmount a share.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Gateway > Shares**.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**.
![Select share 4](media/azure-stack-edge-manage-shares/select-share-unmount.png)
@@ -143,13 +141,13 @@ Do the following steps in the Azure portal to unmount a share.
Do the following steps in the Azure portal to delete a share.
-1. From the list of shares, select and click the share that you want to delete.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of shares, select the share that you want to delete.
![Select share 5](media/azure-stack-edge-manage-shares/delete-share-1.png)
-2. Click **Delete**.
+2. Select **Delete**.
- ![Click delete](media/azure-stack-edge-manage-shares/delete-share-2.png)
+ ![Select delete](media/azure-stack-edge-manage-shares/delete-share-2.png)
3. When prompted for confirmation, click **Yes**.
@@ -168,15 +166,15 @@ The refresh feature allows you to refresh the contents of a share. When you refr
Do the following steps in the Azure portal to refresh a share.
-1. In the Azure portal, go to **Shares**. Select and click the share that you want to refresh.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. Select the share that you want to refresh.
![Select share 6](media/azure-stack-edge-manage-shares/refresh-share-1.png)
-2. Click **Refresh**.
+2. Select **Refresh data**.
- ![Click refresh](media/azure-stack-edge-manage-shares/refresh-share-2.png)
+ ![Select refresh](media/azure-stack-edge-manage-shares/refresh-share-2.png)
-3. When prompted for confirmation, click **Yes**. A job starts to refresh the contents of the on-premises share.
+3. When prompted for confirmation, select **Yes**. A job starts to refresh the contents of the on-premises share.
![Confirm refresh](media/azure-stack-edge-manage-shares/refresh-share-3.png)
@@ -194,7 +192,7 @@ If your storage account keys have been rotated, then you need to sync the storag
Do the following steps in the Azure portal to sync your storage access key.
-1. Go to **Overview** in your resource. From the list of shares, choose and click a share associated with the storage account that you need to sync.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Cloud storage gateway > Shares**. From the list of shares, select the share associated with the storage account whose key you need to sync.
![Select share with relevant storage account](media/azure-stack-edge-manage-shares/sync-storage-key-1.png)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-manage-users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-manage-users.md
@@ -1,5 +1,5 @@
---
-title: Azure Stack Edge Pro manage users | Microsoft Docs
+title: Azure Stack Edge Pro FPGA manage users | Microsoft Docs
description: Describes how to use the Azure portal to manage users on your Azure Stack Edge Pro. services: databox author: alkohli
@@ -7,12 +7,12 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: how-to
-ms.date: 03/11/2019
+ms.date: 01/05/2021
ms.author: alkohli ---
-# Use the Azure portal to manage users on your Azure Azure Stack Edge Pro
+# Use the Azure portal to manage users on your Azure Stack Edge Pro FPGA
-This article describes how to manage users on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, modify, or delete users.
+This article describes how to manage users on your Azure Stack Edge Pro FPGA device. You can manage the Azure Stack Edge Pro FPGA via the Azure portal or via the local web UI. Use the Azure portal to add, modify, or delete users.
In this article, you learn how to:
@@ -34,7 +34,7 @@ The user permissions are first defined when the user is created during share cre
Do the following steps in the Azure portal to add a user.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Overview > Users**. Select **+ Add user** on the command bar.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Users**. Select **+ Add user** on the command bar.
![Select add user](media/azure-stack-edge-manage-users/add-user-1.png)
@@ -60,7 +60,7 @@ You can change the password associated with a user once the user is created. Sel
Do the following steps in the Azure portal to delete a user.
-1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Overview > Users**.
+1. In the Azure portal, go to your Azure Stack Edge resource and then go to **Users**.
![Select user to delete](media/azure-stack-edge-manage-users/delete-user-1.png)
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-mini-r-deploy-prep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-deploy-prep.md
@@ -7,14 +7,14 @@ author: alkohli
ms.service: databox ms.subservice: edge ms.topic: tutorial
-ms.date: 12/16/2020
+ms.date: 01/04/2021
ms.author: alkohli Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Mini R device so I can use it to transfer data to Azure. --- # Tutorial: Prepare to deploy Azure Stack Edge Mini R
-This is the first tutorial in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Mini R device. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
+This tutorial is the first in the series of deployment tutorials that are required to completely deploy an Azure Stack Edge Mini R device. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 10 minutes.
@@ -32,7 +32,7 @@ To deploy Azure Stack Edge Mini R, refer to the following tutorials in the presc
| --- | --- | | **Preparation** |These steps must be completed in preparation for the upcoming deployment. | | **[Deployment configuration checklist](#deployment-configuration-checklist)** |Use this checklist to gather and record information before and during the deployment. |
-| **[Deployment prerequisites](#prerequisites)** |These validate the environment is ready for deployment. |
+| **[Deployment prerequisites](#prerequisites)** |These prerequisites validate that the environment is ready for deployment. |
| | | |**Deployment tutorials** |These tutorials are required to deploy your Azure Stack Edge Mini R device in production. | |**[1. Prepare the Azure portal for device](azure-stack-edge-mini-r-deploy-prep.md)** |Create and configure your Azure Stack Edge resource before you install the physical device. |
@@ -42,7 +42,7 @@ To deploy Azure Stack Edge Mini R, refer to the following tutorials in the presc
|**[5. Configure device settings](azure-stack-edge-mini-r-deploy-set-up-device-update-time.md)** |Assign a device name and DNS domain, configure update server and device time. | |**[6. Configure security settings](azure-stack-edge-mini-r-deploy-configure-certificates-vpn-encryption.md)** |Configure certificates using your own certificates, set up VPN, and configure encryption-at-rest for your device. | |**[7. Activate the device](azure-stack-edge-mini-r-deploy-activate.md)** |Use the activation key from service to activate the device. The device is ready to set up SMB or NFS shares or connect via REST. |
-|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. This will also create a Kubernetes cluster. |
+|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. A Kubernetes cluster is also created. |
You can now begin to set up the Azure portal.
@@ -80,7 +80,7 @@ Before you begin, make sure that:
If you have an existing Azure Stack Edge resource to manage your physical device, skip this step and go to [Get the activation key](#get-the-activation-key).
-To create a Azure Stack Edge resource, take the following steps in the Azure portal.
+To create an Azure Stack Edge resource, take the following steps in the Azure portal.
1. Use your Microsoft Azure credentials to sign in to the Azure portal at this URL: [https://portal.azure.com](https://portal.azure.com).
@@ -101,7 +101,7 @@ To create a Azure Stack Edge resource, take the following steps in the Azure por
|Setting |Value | |---------|---------|
- |Subscription |This is automatically populated based on the earlier selection. Subscription is linked to your billing account. |
+ |Subscription |The subscription is automatically populated based on the earlier selection. The subscription is linked to your billing account. |
|Resource group |Select an existing group or create a new group.<br>Learn more about [Azure Resource Groups](../azure-resource-manager/management/overview.md). |
@@ -109,7 +109,7 @@ To create a Azure Stack Edge resource, take the following steps in the Azure por
|Setting |Value | |---------|---------|
- |Name | A friendly name to identify the resource.<br>The name has between 2 and 50 characters containing letter, numbers, and hyphens.<br> Name starts and ends with a letter or a number. |
+ |Name | A friendly name to identify the resource.<br>The name can have from 2 to 50 characters, including letters, numbers, and hyphens.<br> The name starts and ends with a letter or a number. |
|Region |For a list of all the regions where the Azure Stack Edge resource is available, see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=databox&regions=all). If using Azure Government, all the government regions are available as shown in the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).<br> Choose a location closest to the geographical region where you want to deploy your device.| ![Create a resource 4](media/azure-stack-edge-mini-r-deploy-prep/create-resource-4.png)
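The resource-name rules in the table above translate to a one-line pattern. This is an illustrative check only, not an official SDK validator: 2 to 50 characters of letters, numbers, and hyphens, starting and ending with a letter or a number.

```python
import re

# Illustrative check for the resource-name rules in the table above:
# first and last characters alphanumeric, up to 48 letters/numbers/hyphens
# in between (total length 2-50).
_RESOURCE_NAME = re.compile(r"^[A-Za-z0-9][A-Za-z0-9-]{0,48}[A-Za-z0-9]$")

def is_valid_resource_name(name: str) -> bool:
    return _RESOURCE_NAME.fullmatch(name) is not None
```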
@@ -117,11 +117,11 @@ To create a Azure Stack Edge resource, take the following steps in the Azure por
8. Select **Next: Shipping address**.
- - If you already have a device, select the combo box for **I have a Azure Stack Edge Pro R device**.
+ - If you already have a device, select the combo box for **I already have a device**.
![Create a resource 5](media/azure-stack-edge-mini-r-deploy-prep/create-resource-5.png)
- - If this is the new device that you are ordering, enter the contact name, company, address to ship the device, and contact information.
+ - If this is a new device that you are ordering, enter the contact name, company, address to ship the device to, and contact information.
![Create a resource 6](media/azure-stack-edge-mini-r-deploy-prep/create-resource-6.png)
@@ -129,9 +129,9 @@ To create a Azure Stack Edge resource, take the following steps in the Azure por
10. On the **Review + create** tab, review the **Pricing details**, **Terms of use**, and the details for your resource. Select the combo box for **I have reviewed the privacy terms**.
- ![Create a resource 7](media/azure-stack-edge-mini-r-deploy-prep/create-resource-7.png)
+ ![Create a resource 7](media/azure-stack-edge-mini-r-deploy-prep/create-resource-7.png)
- You are also notified that during the resource creation, a Managed Service Identity (MSI) is enabled that lets you authenticate to cloud services. This identity exists for as long as the resource exists.
+ You're also notified that during resource creation, a Managed Service Identity (MSI) is enabled that lets you authenticate to cloud services. This identity exists for as long as the resource exists.
8. Select **Create**.
@@ -153,9 +153,9 @@ After the Azure Stack Edge resource is up and running, you'll need to get the ac
![Select Device setup](media/azure-stack-edge-mini-r-deploy-prep/azure-stack-edge-resource-2.png)
-2. On the **Activate** tile, provide a name for the Azure Key Vault or accept the default name. The key vault name can be between 3 and 24 characters.
+2. On the **Activate** tile, provide a name for the Azure Key Vault, or accept the default name. The key vault name can be between 3 and 24 characters.
- A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets, for example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
+ A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets. For example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
Once you have specified a key vault name, select **Generate key** to create an activation key.
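The article states only the 3-to-24-character bound for the key vault name. The sketch below adds the general Azure Key Vault naming rules (alphanumerics and hyphens only, starts with a letter, ends with a letter or digit, no consecutive hyphens) as an assumption drawn from Key Vault documentation rather than from this article:

```python
import re

def is_valid_key_vault_name(name: str) -> bool:
    # 3-24 characters is the bound quoted in the article; the remaining
    # checks are the general Key Vault naming rules (assumed, not stated here).
    return (
        3 <= len(name) <= 24
        and re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*[A-Za-z0-9]", name) is not None
        and "--" not in name
    )
```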
databox https://docs.microsoft.com/en-us/azure/databox/data-box-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-limits.md
@@ -7,7 +7,7 @@ author: alkohli
ms.service: databox ms.subservice: pod ms.topic: article
-ms.date: 11/16/2020
+ms.date: 01/05/2021
ms.author: alkohli --- # Azure Data Box limits
@@ -38,7 +38,7 @@ Data Box caveats for an import order include:
[!INCLUDE [data-box-data-upload-caveats](../../includes/data-box-data-upload-caveats.md)]
-## For export order
+### For export order
Data Box caveats for an export order include:
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/alerts.md
@@ -29,7 +29,7 @@ In this tutorial, you'll learn how to:
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md) and DDoS Protection Standard must be enabled on a virtual network.-- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments and Azure VPN Gateway. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.  
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
## Configure alerts through Azure Monitor
@@ -97,4 +97,4 @@ In this tutorial, you learned how to:
To learn how to test and simulate a DDoS attack, see the simulation testing guide: > [!div class="nextstepaction"]
-> [Test through simulations](test-through-simulations.md)
\ No newline at end of file
+> [Test through simulations](test-through-simulations.md)
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/diagnostic-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/diagnostic-logging.md
@@ -36,7 +36,7 @@ In this tutorial, you'll learn how to:
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Before you can complete the steps in this tutorial, you must first create a [Azure DDoS Standard protection plan](manage-ddos-protection.md) and DDoS Protection Standard must be enabled on a virtual network.-- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments and Azure VPN Gateway. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.  
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
## Configure DDoS diagnostic logs
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry.md
@@ -65,7 +65,7 @@ The following [metrics](https://docs.microsoft.com/azure/azure-monitor/platform/
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- Before you can complete the steps in this tutorial, you must first create an [Azure DDoS Standard protection plan](manage-ddos-protection.md) and DDoS Protection Standard must be enabled on a virtual network.
-- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments and Azure VPN Gateway. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
+- DDoS monitors public IP addresses assigned to resources within a virtual network. If you don't have any resources with public IP addresses in the virtual network, you must first create a resource with a public IP address. You can monitor the public IP address of all resources deployed through Resource Manager (not classic) listed in [Virtual network for Azure services](../virtual-network/virtual-network-for-azure-services.md#services-that-can-be-deployed-into-a-virtual-network) (including Azure Load Balancers where the backend virtual machines are in the virtual network), except for Azure App Service Environments. To continue with this tutorial, you can quickly create a [Windows](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine.
## View DDoS protection telemetry
ddos-protection https://docs.microsoft.com/en-us/azure/ddos-protection/test-through-simulations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/test-through-simulations.md
@@ -38,9 +38,9 @@ We have partnered with [BreakingPoint Cloud](https://www.ixiacom.com/products/br
|--------- |--------- |
|Target IP address | Enter one of your public IP addresses that you want to test. |
|Port Number | Enter _443_. |
- |DDoS Profile | Select **TCP SYN Flood**.|
- |Test Size | Select **200K pps, 100 Mbps and 8 source IPs.** |
- |Test Duration | Select **10 Minutes**.|
+ |DDoS Profile | Possible values include **DNS Flood**, **NTPv2 Flood**, **SSDP Flood**, **TCP SYN Flood**, **UDP 64B Flood**, **UDP 128B Flood**, **UDP 256B Flood**, **UDP 512B Flood**, **UDP 1024B Flood**, **UDP 1514B Flood**, **UDP Fragmentation**, **UDP Memcached**.|
+ |Test Size | Possible values include **100K pps, 50 Mbps and 4 source IPs**, **200K pps, 100 Mbps and 8 source IPs**, **400K pps, 200Mbps and 16 source IPs**, **800K pps, 400 Mbps and 32 source IPs**. |
+ |Test Duration | Possible values include **10 Minutes**, **15 Minutes**, **20 Minutes**, **25 Minutes**, **30 Minutes**.|
It should now appear like this:
digital-twins https://docs.microsoft.com/en-us/azure/digital-twins/troubleshoot-known-issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-known-issues.md
@@ -42,11 +42,11 @@ This article provides information about known issues associated with Azure Digit
## Issue with default Azure credential authentication on Azure.Identity 1.3.0
-**Issue description:** When writing authentication code in your Azure Digital Twins applications using version **1.3.0** of the **[Azure.Identity](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true) library**, you may experience issues with the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet?view=azure-dotnet&preserve-view=true) method used in many samples throughout these docs. This presents as an error response of "Azure.Identity.AuthenticationFailedException: SharedTokenCacheCredential authentication failed" when the code tries to authenticate.
+**Issue description:** When writing authentication code using version **1.3.0** of the **[Azure.Identity](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true) library**, some users have experienced issues with the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet?view=azure-dotnet&preserve-view=true) method used in many samples throughout these Azure Digital Twins docs. This presents as an error response of "Azure.Identity.AuthenticationFailedException: SharedTokenCacheCredential authentication failed" when the code tries to authenticate.
| Does this affect me? | Cause | Resolution |
| --- | --- | --- |
-| DefaultAzureCredential is used in most of the documentation examples that include authentication. If you are writing authentication code using DefaultAzureCredential and using version 1.3.0 of the `Azure.Identity` library, this is likely to affect you. | This issue presents when using DefaultAzureCredential with version **1.3.0** of the `Azure.Identity` library. | To resolve, switch your application to use [version 1.2.2](https://www.nuget.org/packages/Azure.Identity/1.2.2) of `Azure.Identity`. After changing the library version, authentication should succeed as expected. |
+| `DefaultAzureCredential` is used in most of the documentation examples for this service that include authentication. If you are writing authentication code using `DefaultAzureCredential` with version 1.3.0 of the `Azure.Identity` library and seeing this error message, this affects you. | This is likely a result of some configuration issue with `Azure.Identity`. | One strategy to resolve this is to exclude `SharedTokenCacheCredential` from your credential, as described in this [DefaultAzureCredential issue](https://github.com/Azure/azure-sdk/issues/1970) that is currently open against `Azure.Identity`.<br>Another option is to change your application to use an earlier version of `Azure.Identity`, such as [version 1.2.3](https://www.nuget.org/packages/Azure.Identity/1.2.3). This has no functional impact to Azure Digital Twins and thus is also an accepted solution. |
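If the second resolution is chosen, a minimal sketch of the project-file change (assuming a .NET SDK-style project; the fragment is illustrative, not part of the original article) looks like this:

```xml
<!-- Hypothetical .csproj fragment: pin Azure.Identity below 1.3.0 until the
     SharedTokenCacheCredential issue is resolved. -->
<ItemGroup>
  <PackageReference Include="Azure.Identity" Version="1.2.3" />
</ItemGroup>
```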
## Next steps
event-grid https://docs.microsoft.com/en-us/azure/event-grid/cloudevents-schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/cloudevents-schema.md
@@ -1,6 +1,6 @@
---
title: Use Azure Event Grid with events in CloudEvents schema
-description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of Cloud Events.
+description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of CloudEvents.
ms.topic: conceptual
ms.date: 11/10/2020
ms.custom: devx-track-js, devx-track-csharp, devx-track-azurecli
@@ -9,16 +9,15 @@ ms.custom: devx-track-js, devx-track-csharp, devx-track-azurecli
# Use CloudEvents v1.0 schema with Event Grid

In addition to its [default event schema](event-schema.md), Azure Event Grid natively supports events in the [JSON implementation of CloudEvents v1.0](https://github.com/cloudevents/spec/blob/v1.0/json-format.md) and [HTTP protocol binding](https://github.com/cloudevents/spec/blob/v1.0/http-protocol-binding.md). [CloudEvents](https://cloudevents.io/) is an [open specification](https://github.com/cloudevents/spec/blob/v1.0/spec.md) for describing event data.
-CloudEvents simplifies interoperability by providing a common event schema for publishing, and consuming cloud based events. This schema allows for uniform tooling, standard ways of routing & handling events, and universal ways of deserializing the outer event schema. With a common schema, you can more easily integrate work across platforms.
+CloudEvents simplifies interoperability by providing a common event schema for publishing and consuming cloud-based events. This schema allows for uniform tooling, standard ways of routing and handling events, and universal ways of deserializing the outer event schema. With a common schema, you can more easily integrate work across platforms.
CloudEvents is being built by several [collaborators](https://github.com/cloudevents/spec/blob/master/community/contributors.md), including Microsoft, through the [Cloud Native Computing Foundation](https://www.cncf.io/). It's currently available as version 1.0.

This article describes how to use the CloudEvents schema with Event Grid.
-
## CloudEvent schema
-Here is an example of an Azure Blob Storage event in CloudEvents format:
+Here's an example of an Azure Blob Storage event in CloudEvents format:
``` JSON
{
@@ -46,9 +45,9 @@ Here is an example of an Azure Blob Storage event in CloudEvents format:
}
```
-A detailed description of the available fields, their types, and definitions in CloudEvents v1.0 is [available here](https://github.com/cloudevents/spec/blob/v1.0/spec.md#required-attributes).
+For a detailed description of the available fields, their types, and definitions, see [CloudEvents v1.0](https://github.com/cloudevents/spec/blob/v1.0/spec.md#required-attributes).
-The headers values for events delivered in the CloudEvents schema and the Event Grid schema are the same except for `content-type`. For CloudEvents schema, that header value is `"content-type":"application/cloudevents+json; charset=utf-8"`. For Event Grid schema, that header value is `"content-type":"application/json; charset=utf-8"`.
+The header values for events delivered in the CloudEvents schema and the Event Grid schema are the same except for `content-type`. For the CloudEvents schema, that header value is `"content-type":"application/cloudevents+json; charset=utf-8"`. For the Event Grid schema, that header value is `"content-type":"application/json; charset=utf-8"`.
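The header-based distinction described above can be sketched as follows. This is a hedged illustration, not an official SDK helper: the header values and the CloudEvents v1.0 required attribute names come from the article and the CloudEvents spec, while the function itself is an assumption for illustration.

```python
import json

# CloudEvents v1.0 required context attributes (per the CloudEvents spec).
REQUIRED_CLOUDEVENT_ATTRS = {"specversion", "id", "source", "type"}

def parse_delivery(content_type, body):
    """Return (schema_name, events) for one HTTP delivery, keyed off content-type."""
    payload = json.loads(body)
    if content_type.startswith("application/cloudevents+json"):
        # CloudEvents schema delivers a single event per request.
        missing = REQUIRED_CLOUDEVENT_ATTRS - payload.keys()
        if missing:
            raise ValueError("missing CloudEvents attributes: %s" % sorted(missing))
        return "CloudEvents", [payload]
    # Event Grid schema delivers an array of events per request.
    return "EventGrid", payload
```

A handler can call `parse_delivery` once per request and treat both schemas with the same downstream code.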
## Configure Event Grid for CloudEvents
@@ -56,14 +55,13 @@ You can use Event Grid for both input and output of events in the CloudEvents sc
Event Grid resource | Input schema | Delivery schema
|---------------------|-------------------|---------------------
-| System Topics | Event Grid Schema | Event Grid Schema or CloudEvent Schema
-| User Topics/Domains | Event Grid Schema | Event Grid Schema
-| User Topics/Domains | CloudEvent Schema | CloudEvent Schema
-| User Topics/Domains | Custom Schema | Custom Schema OR Event Grid Schema OR CloudEvent Schema
-| PartnerTopics | CloudEvent Schema | CloudEvent Schema
-
+| System Topics | Event Grid schema | Event Grid schema or CloudEvent schema
+| User Topics/Domains | Event Grid schema | Event Grid schema
+| User Topics/Domains | CloudEvent schema | CloudEvent schema
+| User Topics/Domains | Custom schema | Custom schema, Event Grid schema, or CloudEvent schema
+| PartnerTopics | CloudEvent schema | CloudEvent schema
-For all event schemas, Event Grid requires validation when publishing to an Event Grid topic and when creating an event subscription.
+For all event schemas, Event Grid requires validation when you're publishing to an Event Grid topic and when you're creating an event subscription.
For more information, see [Event Grid security and authentication](security-authentication.md).
@@ -71,7 +69,7 @@ For more information, see [Event Grid security and authentication](security-auth
You set the input schema for a custom topic when you create the custom topic.
-For Azure CLI, use:
+For the Azure CLI, use:
```azurecli-interactive
az eventgrid topic create \
@@ -95,7 +93,7 @@ New-AzEventGridTopic `
You set the output schema when you create the event subscription.
-For Azure CLI, use:
+For the Azure CLI, use:
```azurecli-interactive
topicID=$(az eventgrid topic show --name <topic-name> -g gridResourceGroup --query id --output tsv)
@@ -120,22 +118,22 @@ New-AzEventGridSubscription `
Currently, you can't use an Event Grid trigger for an Azure Functions app when the event is delivered in the CloudEvents schema. Use an HTTP trigger. For examples of implementing an HTTP trigger that receives events in the CloudEvents schema, see [Using CloudEvents with Azure Functions](#azure-functions).
- ## Endpoint Validation with CloudEvents v1.0
+## Endpoint validation with CloudEvents v1.0
-If you are already familiar with Event Grid, you may be aware of Event Grid's endpoint validation handshake for preventing abuse. CloudEvents v1.0 implements its own [abuse protection semantics](webhook-event-delivery.md) using the HTTP OPTIONS method. You can read more about it [here](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When using the CloudEvents schema for output, Event Grid uses with the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism.
+If you're already familiar with Event Grid, you might be aware of the endpoint validation handshake for preventing abuse. CloudEvents v1.0 implements its own [abuse protection semantics](webhook-event-delivery.md) by using the HTTP OPTIONS method. To read more about it, see [HTTP 1.1 Web Hooks for event delivery - Version 1.0](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When you use the CloudEvents schema for output, Event Grid uses the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism.
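The abuse-protection handshake mentioned above can be sketched like this. Before delivering events, the sender issues an HTTP OPTIONS request carrying a `WebHook-Request-Origin` header, and the receiver authorizes it by echoing that origin (or `*`) back in `WebHook-Allowed-Origin`. The header names follow the CloudEvents HTTP webhook specification; the handler shape and status codes below are assumptions for illustration.

```python
def handle_validation_options(request_headers):
    """Return (status_code, response_headers) for an OPTIONS validation request."""
    origin = request_headers.get("WebHook-Request-Origin")
    if not origin:
        # Not a validation request we recognize; refuse it.
        return 400, {}
    # Echo the origin back to authorize it ("*" would allow any origin).
    return 200, {"WebHook-Allowed-Origin": origin, "Allow": "POST"}
```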
<a name="azure-functions"></a>

## Use with Azure Functions
-The [Azure Functions Event Grid binding](../azure-functions/functions-bindings-event-grid.md) does not natively support CloudEvents, so HTTP-triggered functions are used to read CloudEvents messages. When using an HTTP trigger to read CloudEvents, you have to write code for what the Event Grid trigger does automatically:
+The [Azure Functions Event Grid binding](../azure-functions/functions-bindings-event-grid.md) doesn't natively support CloudEvents, so HTTP-triggered functions are used to read CloudEvents messages. When you use an HTTP trigger to read CloudEvents, you have to write code for what the Event Grid trigger does automatically:
-* Sends a validation response to a [subscription validation request](../event-grid/webhook-event-delivery.md).
-* Invokes the function once per element of the event array contained in the request body.
+* Sends a validation response to a [subscription validation request](../event-grid/webhook-event-delivery.md)
+* Invokes the function once per element of the event array contained in the request body
-For information about the URL to use for invoking the function locally or when it runs in Azure, see the [HTTP trigger binding reference documentation](../azure-functions/functions-bindings-http-webhook.md)
+For information about the URL to use for invoking the function locally or when it runs in Azure, see the [HTTP trigger binding reference documentation](../azure-functions/functions-bindings-http-webhook.md).
-The following sample C# code for an HTTP trigger simulates Event Grid trigger behavior. Use this example for events delivered in the CloudEvents schema.
+The following sample C# code for an HTTP trigger simulates Event Grid trigger behavior. Use this example for events delivered in the CloudEvents schema.
```csharp
[FunctionName("HttpTrigger")]
@@ -155,7 +153,7 @@ public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLeve
    var requestmessage = await req.Content.ReadAsStringAsync();
    var message = JToken.Parse(requestmessage);
- // The request is not for subscription validation, so it's for an event.
+ // The request isn't for subscription validation, so it's for an event.
    // CloudEvents schema delivers one event at a time.
    log.LogInformation($"Source: {message["source"]}");
    log.LogInformation($"Time: {message["eventTime"]}");
@@ -186,7 +184,7 @@ module.exports = function (context, req) {
    {
        var message = req.body;
- // The request is not for subscription validation, so it's for an event.
+ // The request isn't for subscription validation, so it's for an event.
        // CloudEvents schema delivers one event at a time.
        var event = JSON.parse(message);
        context.log('Source: ' + event.source);
@@ -201,5 +199,5 @@ module.exports = function (context, req) {
## Next steps

* For information about monitoring event deliveries, see [Monitor Event Grid message delivery](monitor-event-delivery.md).
-* We encourage you to test, comment on, and [contribute](https://github.com/cloudevents/spec/blob/master/community/CONTRIBUTING.md) to CloudEvents.
+* We encourage you to test, comment on, and [contribute to CloudEvents](https://github.com/cloudevents/spec/blob/master/community/CONTRIBUTING.md).
* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations-providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
@@ -6,7 +6,7 @@ author: duongau
ms.service: expressroute
ms.topic: conceptual
-ms.date: 12/28/2020
+ms.date: 01/05/2021
ms.author: duau
---
# ExpressRoute partners and peering locations
@@ -73,7 +73,7 @@ The following table shows connectivity locations and the service providers for e
| **Location** | **Address** | **Zone** | **Local Azure regions** | **ER Direct** | **Service providers** |
| --- | --- | --- | --- | --- | --- |
| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | 10G, 100G | Aryaka Networks, AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, GÉANT, InterCloud, Interxion, KPN, IX Reach, Level 3 Communications, Megaport, NTT Communications, Orange, Tata Communications, Telefonica, Telenor, Telia Carrier, Verizon, Zayo |
-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, euNetworks, GÉANT, Interxion, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
+| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | 10G, 100G | British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | 10G, 100G | Equinix, Megaport |
| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | 10G | Devoli, Kordia, Megaport, Spark NZ, Vocus Group NZ |
| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | 10G | AIS, UIH |
expressroute https://docs.microsoft.com/en-us/azure/expressroute/expressroute-locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
@@ -7,7 +7,7 @@ author: duongau
ms.service: expressroute
ms.topic: conceptual
ms.workload: infrastructure-services
-ms.date: 12/10/2020
+ms.date: 01/05/2021
ms.author: duau
---
@@ -101,7 +101,7 @@ The following table shows locations by service provider. If you want to view ava
| **du datamena** |Supported |Supported | Dubai2 |
| **eir** |Supported |Supported |Dublin|
| **[Epsilon Global Communications](https://www.epsilontel.com/solutions/direct-cloud-connect)** |Supported |Supported |Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Amsterdam, Atlanta, Berlin, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong SAR, London, London2, Los Angeles, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Amsterdam, Amsterdam2, Atlanta, Berlin, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong SAR, London, London2, Los Angeles, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich |
| **Etisalat UAE** |Supported |Supported |Dubai|
| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported |Amsterdam, Amsterdam2, Dublin, Frankfurt, London |
| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported |Taipei|
firewall-manager https://docs.microsoft.com/en-us/azure/firewall-manager/migrate-to-policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/migrate-to-policy.md
@@ -25,11 +25,10 @@ Modify the following script to migrate your firewall configuration.
```azurepowershell
#Input params to be modified as needed
-$FirewallName = "AZFW"
-$ResourceGroupName = "AzFWMigrateRG"
-$PolicyName = "fwp9"
-$Location = "WestUS"
-
+$FirewallResourceGroup = "AzFWMigrateRG"
+$FirewallName = "azfw"
+$FirewallPolicyName = "fwpolicy"
+$FirewallPolicyLocation = "WestEurope"
$DefaultAppRuleCollectionGroupName = "ApplicationRuleCollectionGroup"
$DefaultNetRuleCollectionGroupName = "NetworkRuleCollectionGroup"
@@ -72,10 +71,14 @@ Function GetApplicationRuleCmd
    return $cmd
}
+If(!(Get-AzResourceGroup -Name $FirewallResourceGroup))
+{
+ New-AzResourceGroup -Name $FirewallResourceGroup -Location $FirewallPolicyLocation
+}
-$azfw = Get-AzFirewall -Name $FirewallName -ResourceGroupName $ResourceGroupName
+$azfw = Get-AzFirewall -Name $FirewallName -ResourceGroupName $FirewallResourceGroup
Write-Host "creating empty firewall policy"
-$fwp = New-AzFirewallPolicy -Name $PolicyName -ResourceGroupName $ResourceGroupName -Location $Location -ThreatIntelMode $azfw.ThreatIntelMode
+$fwp = New-AzFirewallPolicy -Name $FirewallPolicyName -ResourceGroupName $FirewallResourceGroup -Location $FirewallPolicyLocation -ThreatIntelMode $azfw.ThreatIntelMode
Write-Host $fwp.Name "created"
Write-Host "creating " $azfw.ApplicationRuleCollections.Count " application rule collections"
@@ -110,11 +113,30 @@ If ($azfw.NetworkRuleCollections.Count -gt 0) {
        Write-Host "creating " $rc.Rules.Count " network rules for collection " $rc.Name
        $firewallPolicyNetRules = @()
        ForEach ($rule in $rc.Rules) {
- $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
- Write-Host "Created network rule " $firewallPolicyNetRule.Name
- $firewallPolicyNetRules += $firewallPolicyNetRule
- }
- $fwpNetRuleCollection = New-AzFirewallPolicyFilterRuleCollection -Name $rc.Name -Priority $rc.Priority -ActionType $rc.Action.Type -Rule $firewallPolicyNetRules
+ If($rule.SourceAddresses){
+ If($rule.DestinationAddresses)
+ {
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ }
+ elseif($rule.DestinationIpGroups)
+ {
+                $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ }
+ }
+ elseif($rule.SourceIpGroups){
+ If($rule.DestinationAddresses)
+ {
+                $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceIpGroup $rule.SourceIpGroups -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ }
+ elseif($rule.DestinationIpGroups)
+ {
+ $firewallPolicyNetRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceIpGroup $rule.SourceIpGroups -DestinationIpGroup $rule.DestinationIpGroups -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ }
+ }
+ Write-Host "Created network rule " $firewallPolicyNetRule.Name
+ $firewallPolicyNetRules += $firewallPolicyNetRule
+ }
+        $fwpNetRuleCollection = New-AzFirewallPolicyFilterRuleCollection -Name $rc.Name -Priority $rc.Priority -ActionType $rc.Action.Type -Rule $firewallPolicyNetRules
        Write-Host "Created NetworkRuleCollection " $fwpNetRuleCollection.Name
    }
    $firewallPolicyNetRuleCollections += $fwpNetRuleCollection
@@ -135,17 +157,19 @@ If ($azfw.NatRuleCollections.Count -gt 0) {
    $firewallPolicyNatRuleCollections = @()
    $priority = 100
    ForEach ($rc in $azfw.NatRuleCollections) {
+ $firewallPolicyNatRules = @()
        If ($rc.Rules.Count -gt 0) {
            Write-Host "creating " $rc.Rules.Count " nat rules for collection " $rc.Name
            ForEach ($rule in $rc.Rules) {
- $firewallPolicyNatRule = New-AzFirewallPolicyNetworkRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
+ $firewallPolicyNatRule = New-AzFirewallPolicyNatRule -Name $rule.Name -SourceAddress $rule.SourceAddresses -TranslatedAddress $rule.TranslatedAddress -TranslatedPort $rule.TranslatedPort -DestinationAddress $rule.DestinationAddresses -DestinationPort $rule.DestinationPorts -Protocol $rule.Protocols
Write-Host "Created nat rule " $firewallPolicyNatRule.Name
- $natRuleCollectionName = $rc.Name+$rule.Name
- $fwpNatRuleCollection = New-AzFirewallPolicyNatRuleCollection -Name $natRuleCollectionName -Priority $priority -ActionType $rc.Action.Type -Rule $firewallPolicyNatRule -TranslatedAddress $rule.TranslatedAddress -TranslatedPort $rule.TranslatedPort
- $priority += 1
- Write-Host "Created NatRuleCollection " $fwpNatRuleCollection.Name
- $firewallPolicyNatRuleCollections += $fwpNatRuleCollection
- }
+ $firewallPolicyNatRules += $firewallPolicyNatRule
+ }
+ $natRuleCollectionName = $rc.Name+$rule.Name
+ $fwpNatRuleCollection = New-AzFirewallPolicyNatRuleCollection -Name $natRuleCollectionName -Priority $priority -ActionType $rc.Action.Type -Rule $firewallPolicyNatRules
+ $priority += 1
+ Write-Host "Created NatRuleCollection " $fwpNatRuleCollection.Name
+ $firewallPolicyNatRuleCollections += $fwpNatRuleCollection
        }
    }
    $natRuleGroup = New-AzFirewallPolicyRuleCollectionGroup -Name $DefaultNatRuleCollectionGroupName -Priority $NatRuleGroupPriority -RuleCollection $firewallPolicyNatRuleCollections -FirewallPolicyObject $fwp
@@ -154,4 +178,4 @@ If ($azfw.NatRuleCollections.Count -gt 0) {
```

## Next steps
-Learn more about Azure Firewall Manager deployment: [Azure Firewall Manager deployment overview](deployment-overview.md).
\ No newline at end of file
+Learn more about Azure Firewall Manager deployment: [Azure Firewall Manager deployment overview](deployment-overview.md).
hdinsight https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-private-link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-private-link.md
@@ -24,7 +24,7 @@ By default, the HDInsight RP uses an *inbound* connection to the cluster using p
The basic load balancers used in the default virtual network architecture automatically provide public NAT (network address translation) to access the required outbound dependencies, such as the HDInsight RP. If you want to restrict outbound connectivity to the public internet, you can [configure a firewall](./hdinsight-restrict-outbound-traffic.md), but it's not a requirement.
-Configuring `resourceProviderConnection` to outbound also allows you to access cluster-specific resources, such as Azure Data Lake Storage Gen2 or external metastores, using private endpoints. Using private endpoints for these resources is not mandetory, but if you plan to have private endpoints for these resources, you must configure the private endpoints and DNS entries `before` you create the HDInsight cluster. We recommend you create and provide all of the external SQL databases you need, such as Apache Ranger, Ambari, Oozie and Hive metastores, at cluster creation time. The requirement is that all of these resources must be accessible from inside the cluster subnet, either through their own private endpoint or otherwise.
+Configuring `resourceProviderConnection` to outbound also allows you to access cluster-specific resources, such as Azure Data Lake Storage Gen2 or external metastores, using private endpoints. Using private endpoints for these resources is not mandatory, but if you plan to have private endpoints for these resources, you must configure the private endpoints and DNS entries `before` you create the HDInsight cluster. We recommend you create and provide all of the external SQL databases you need, such as Apache Ranger, Ambari, Oozie and Hive metastores, at cluster creation time. The requirement is that all of these resources must be accessible from inside the cluster subnet, either through their own private endpoint or otherwise.
Using private endpoints for Azure Key Vault is not supported. If you're using Azure Key Vault for CMK encryption at rest, the Azure Key Vault endpoint must be accessible from within the HDInsight subnet with no private endpoint.
@@ -105,4 +105,4 @@ To use Azure CLI, see the example [here](/cli/azure/hdinsight?view=azure-cli-lat
## Next steps

* [Enterprise Security Package for Azure HDInsight](enterprise-security-package.md)
-* [Enterprise security general information and guidelines in Azure HDInsight](./domain-joined/general-guidelines.md)
\ No newline at end of file
+* [Enterprise security general information and guidelines in Azure HDInsight](./domain-joined/general-guidelines.md)
iot-edge https://docs.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-x509-certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-x509-certs.md
@@ -9,6 +9,7 @@ ms.date: 04/09/2020
ms.topic: conceptual
ms.service: iot-edge
services: iot-edge
+ms.custom: contperf-fy21q2
--- # Create and provision an IoT Edge device using X.509 certificates
iot-hub https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-event-grid-routing-comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-event-grid-routing-comparison.md
@@ -27,7 +27,7 @@ While both message routing and Event Grid enable alert configuration, there are
| Feature | IoT Hub message routing | IoT Hub integration with Event Grid |
| ------- | --------------- | ---------- |
-| **Device messages and events** | Yes, message routing can be used for telemetry data, report device twin changes, device lifecycle events (ex. when devices are created, deleted, connected and disconnected from IoT Hub), and digital twin change events. | Yes, Event Grid can be used for telemetry data and device lifecycle events. But Event grid can not be used for device twin change events and digital twin change events. |
+| **Device messages and events** | Yes, message routing can be used for telemetry data, device twin changes, device lifecycle events, and digital twin change events. | Yes, Event Grid can be used for telemetry data and device events like device created/deleted/connected/disconnected. But Event Grid cannot be used for device twin change events and digital twin change events. |
| **Ordering** | Yes, ordering of events is maintained. | No, order of events is not guaranteed. | | **Filtering** | Rich filtering on message application properties, message system properties, message body, device twin tags, and device twin properties. Filtering isn't applied to digital twin change events. For examples, see [Message Routing Query Syntax](iot-hub-devguide-routing-query-syntax.md). | Filtering based on event type, subject type and attributes in each event. For examples, see [Understand filtering events in Event Grid Subscriptions](../event-grid/event-filtering.md). When subscribing to telemetry events, you can apply additional filters on the data to filter on message properties, message body and device twin in your IoT Hub, before publishing to Event Grid. See [how to filter events](../iot-hub/iot-hub-event-grid.md#filter-events). | | **Endpoints** | <ul><li>Event Hubs</li> <li>Azure Blob Storage</li> <li>Service Bus queue</li> <li>Service Bus topics</li></ul><br>Paid IoT Hub SKUs (S1, S2, and S3) are limited to 10 custom endpoints. 100 routes can be created per IoT Hub. | <ul><li>Azure Functions</li> <li>Azure Automation</li> <li>Event Hubs</li> <li>Logic Apps</li> <li>Storage Blob</li> <li>Custom Topics</li> <li>Queue Storage</li> <li>Power Automate</li> <li>Third-party services through WebHooks</li></ul><br>500 endpoints per IoT Hub are supported. For the most up-to-date list of endpoints, see [Event Grid event handlers](../event-grid/overview.md#event-handlers). |
key-vault https://docs.microsoft.com/en-us/azure/key-vault/secrets/tutorial-rotation-dual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/tutorial-rotation-dual.md
@@ -220,10 +220,10 @@ Notice that `value` of the key is same as secret in key vault:
Rotation functions template for two sets of credentials and several ready to use functions: -- [Function Template in PowerShell](https://github.com/Azure/KeyVault-Secrets-Rotation-Template-PowerShell)-- [Redis cache](https://github.com/Azure/KeyVault-Secrets-Rotation-Redis-PowerShell)-- [Storage account](https://github.com/Azure/KeyVault-Secrets-Rotation-StorageAccount-PowerShell)-- [Cosmos DB](https://github.com/Azure/KeyVault-Secrets-Rotation-CosmosDB-PowerShell)
+- [Project template](https://serverlesslibrary.net/sample/bc72c6c3-bd8f-4b08-89fb-c5720c1f997f)
+- [Redis Cache](https://serverlesslibrary.net/sample/0d42ac45-3db2-4383-86d7-3b92d09bc978)
+- [Storage Account](https://serverlesslibrary.net/sample/0e4e6618-a96e-4026-9e3a-74b8412213a4)
+- [Cosmos DB](https://serverlesslibrary.net/sample/bcfaee79-4ced-4a5c-969b-0cc3997f47cc)
> [!NOTE] > The above rotation functions were created by a member of the community and not by Microsoft. Community Azure Functions are not supported under any Microsoft support program or service, and are made available AS IS without warranty of any kind.
load-balancer https://docs.microsoft.com/en-us/azure/load-balancer/quickstart-load-balancer-standard-internal-cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
@@ -12,13 +12,13 @@ ms.devlang: na
ms.topic: quickstart ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 10/23/2020
+ms.date: 12/19/2020
ms.author: allensu ms.custom: mvc, devx-track-js, devx-track-azurecli --- # Quickstart: Create an internal load balancer to load balance VMs using Azure CLI
-Get started with Azure Load Balancer by using Azure CLI to create a public load balancer and three virtual machines.
+Get started with Azure Load Balancer by using Azure CLI to create an internal load balancer and three virtual machines.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
@@ -30,7 +30,7 @@ Get started with Azure Load Balancer by using Azure CLI to create a public load
An Azure resource group is a logical container into which Azure resources are deployed and managed.
-Create a resource group with [az group create](/cli/azure/group?view=azure-cli-latest#az-group-create):
+Create a resource group with [az group create](/cli/azure/group#az_group_create):
* Named **CreateIntLBQS-rg**. * In the **eastus** location.
@@ -39,6 +39,7 @@ Create a resource group with [az group create](/cli/azure/group?view=azure-cli-l
az group create \ --name CreateIntLBQS-rg \ --location eastus+ ``` ---
@@ -47,13 +48,15 @@ Create a resource group with [az group create](/cli/azure/group?view=azure-cli-l
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
-## Configure virtual network
+:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal.png" alt-text="Standard load balancer resources created for quickstart." border="false":::
+
+## Configure virtual network - Standard
Before you deploy VMs and deploy your load balancer, create the supporting virtual network resources. ### Create a virtual network
-Create a virtual network using [az network vnet create](/cli/azure/network/vnet?view=azure-cli-latest#az-network-vnet-createt):
+Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create):
* Named **myVNet**. * Address prefix of **10.1.0.0/16**.
@@ -71,11 +74,64 @@ Create a virtual network using [az network vnet create](/cli/azure/network/vnet?
--subnet-name myBackendSubnet \ --subnet-prefixes 10.1.0.0/24 ```+
+### Create a public IP address
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the bastion host:
+
+* Create a standard zone redundant public IP address named **myBastionIP**.
+* In **CreateIntLBQS-rg**.
+
+```azurecli-interactive
+az network public-ip create \
+ --resource-group CreateIntLBQS-rg \
+ --name myBastionIP \
+ --sku Standard
+```
+### Create a bastion subnet
+
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a bastion subnet:
+
+* Named **AzureBastionSubnet**.
+* Address prefix of **10.1.1.0/24**.
+* In virtual network **myVNet**.
+* In resource group **CreateIntLBQS-rg**.
+
+```azurecli-interactive
+az network vnet subnet create \
+ --resource-group CreateIntLBQS-rg \
+ --name AzureBastionSubnet \
+ --vnet-name myVNet \
+ --address-prefixes 10.1.1.0/24
+```
+
+### Create bastion host
+
+Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a bastion host:
+
+* Named **myBastionHost**.
+* In **CreateIntLBQS-rg**.
+* Associated with public IP **myBastionIP**.
+* Associated with virtual network **myVNet**.
+* In **eastus** location.
+
+```azurecli-interactive
+az network bastion create \
+ --resource-group CreateIntLBQS-rg \
+ --name myBastionHost \
+ --public-ip-address myBastionIP \
+ --vnet-name myVNet \
+ --location eastus
+```
+
+It can take a few minutes for the Azure Bastion host to deploy.
++ ### Create a network security group For a standard load balancer, the VMs in the backend address pool are required to have network interfaces that belong to a network security group.
-Create a network security group using [az network nsg create](/cli/azure/network/nsg?view=azure-cli-latest#az-network-nsg-create):
+Create a network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create):
* Named **myNSG**. * In resource group **CreateIntLBQS-rg**.
@@ -88,7 +144,7 @@ Create a network security group using [az network nsg create](/cli/azure/network
### Create a network security group rule
-Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule?view=azure-cli-latest#az-network-nsg-rule-create):
+Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create):
* Named **myNSGRuleHTTP**. * In the network security group you created in the previous step, **myNSG**.
@@ -116,142 +172,59 @@ Create a network security group rule using [az network nsg rule create](/cli/azu
--priority 200 ```
-## Create backend servers
+## Create backend servers - Standard
In this section, you create:
-* Network interfaces for the backend servers.
-* A cloud configuration file named **cloud-init.txt** for the server configuration.
-* Two virtual machines to be used as backend servers for the load balancer.
+* Three network interfaces for the virtual machines.
+* Three virtual machines to be used as backend servers for the load balancer.
### Create network interfaces for the virtual machines
-Create two network interfaces with [az network nic create](/cli/azure/network/nic?view=azure-cli-latest#az-network-nic-create):
-
-#### VM1
-
-* Named **myNicVM1**.
-* In resource group **CreateIntLBQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* In network security group **myNSG**.
-
-```azurecli-interactive
- az network nic create \
- --resource-group CreateIntLBQS-rg \
- --name myNicVM1 \
- --vnet-name myVNet \
- --subnet myBackEndSubnet \
- --network-security-group myNSG
-```
-#### VM2
+Create three network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create):
-* Named **myNicVM2**.
+* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**.
* In resource group **CreateIntLBQS-rg**. * In virtual network **myVNet**. * In subnet **myBackendSubnet**. * In network security group **myNSG**. ```azurecli-interactive
- az network nic create \
- --resource-group CreateIntLBQS-rg \
- --name myNicVM2 \
- --vnet-name myVnet \
- --subnet myBackEndSubnet \
- --network-security-group myNSG
+ array=(myNicVM1 myNicVM2 myNicVM3)
+ for vmnic in "${array[@]}"
+ do
+ az network nic create \
+ --resource-group CreateIntLBQS-rg \
+ --name $vmnic \
+ --vnet-name myVNet \
+ --subnet myBackEndSubnet \
+ --network-security-group myNSG
+ done
```
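The Bash array loop above is standard shell expansion, nothing Azure-specific. A minimal local sketch of the pattern (a hypothetical `echo` stands in for the `az network nic create` call, so no Azure resources are touched):

```shell
# Expand a Bash array to generate one command per resource name.
# `echo` stands in here for the real `az network nic create` call.
nics=(myNicVM1 myNicVM2 myNicVM3)
for nic in "${nics[@]}"
do
  echo "creating NIC: $nic"
done
```

Each iteration prints one line, mirroring how the quickstart issues one `az network nic create` per interface.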
-### Create cloud-init configuration file
-
-Use a cloud-init configuration file to install NGINX and run a 'Hello World' Node.js app on a Linux virtual machine.
-
-In your current shell, create a file named cloud-init.txt. Copy and paste the following configuration into the shell. Ensure that you copy the whole cloud-init file correctly, especially the first line:
-
-```yaml
-#cloud-config
-package_upgrade: true
-packages:
- - nginx
- - nodejs
- - npm
-write_files:
- - owner: www-data:www-data
- - path: /etc/nginx/sites-available/default
- content: |
- server {
- listen 80;
- location / {
- proxy_pass http://localhost:3000;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection keep-alive;
- proxy_set_header Host $host;
- proxy_cache_bypass $http_upgrade;
- }
- }
- - owner: azureuser:azureuser
- - path: /home/azureuser/myapp/index.js
- content: |
- var express = require('express')
- var app = express()
- var os = require('os');
- app.get('/', function (req, res) {
- res.send('Hello World from host ' + os.hostname() + '!')
- })
- app.listen(3000, function () {
- console.log('Hello world app listening on port 3000!')
- })
-runcmd:
- - service nginx restart
- - cd "/home/azureuser/myapp"
- - npm init
- - npm install express -y
- - nodejs index.js
-```
### Create virtual machines
-Create the virtual machines with [az vm create](/cli/azure/vm?view=azure-cli-latest#az-vm-create):
-
-#### VM1
-* Named **myVM1**.
-* In resource group **CreateIntLBQS-rg**.
-* Attached to network interface **myNicVM1**.
-* Virtual machine image **UbuntuLTS**.
-* Configuration file **cloud-init.txt** you created in step above.
-* In **Zone 1**.
+Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
-```azurecli-interactive
- az vm create \
- --resource-group CreateIntLBQS-rg \
- --name myVM1 \
- --nics myNicVM1 \
- --image UbuntuLTS \
- --admin-user azureuser \
- --generate-ssh-keys \
- --custom-data cloud-init.txt \
- --zone 1 \
- --no-wait
-
-```
-#### VM2
-* Named **myVM2**.
+* Named **myVM1**, **myVM2**, and **myVM3**.
* In resource group **CreateIntLBQS-rg**.
-* Attached to network interface **myNicVM2**.
-* Virtual machine image **UbuntuLTS**.
-* Configuration file **cloud-init.txt** you created in step above.
-* In **Zone 2**.
+* Attached to network interfaces **myNicVM1**, **myNicVM2**, and **myNicVM3**.
+* Virtual machine image **win2019datacenter**.
+* In **Zone 1**, **Zone 2**, and **Zone 3**.
```azurecli-interactive
- az vm create \
+ array=(1 2 3)
+ for n in "${array[@]}"
+ do
+ az vm create \
--resource-group CreateIntLBQS-rg \
- --name myVM2 \
- --nics myNicVM2 \
- --image UbuntuLTS \
- --admin-user azureuser \
- --generate-ssh-keys \
- --custom-data cloud-init.txt \
- --zone 2 \
+ --name myVM$n \
+ --nics myNicVM$n \
+ --image win2019datacenter \
+ --admin-username azureuser \
+ --zone $n \
--no-wait
+ done
``` It may take a few minutes for the VMs to deploy.
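In the loop above, the index `n` ties each VM name, NIC name, and availability zone together, and `--no-wait` returns immediately so the three creates are queued back-to-back. A local sketch of the name expansion (it prints the commands instead of running them, so no Azure calls are made):

```shell
# Print the three `az vm create` invocations the loop expands to.
for n in 1 2 3
do
  echo "az vm create --name myVM$n --nics myNicVM$n --zone $n --no-wait"
done
```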
@@ -267,7 +240,7 @@ This section details how you can create and configure the following components o
### Create the load balancer resource
-Create a public load balancer with [az network lb create](/cli/azure/network/lb?view=azure-cli-latest#az-network-lb-create):
+Create a public load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create):
* Named **myLoadBalancer**. * A frontend pool named **myFrontEnd**.
@@ -283,7 +256,7 @@ Create a public load balancer with [az network lb create](/cli/azure/network/lb?
--vnet-name myVnet \ --subnet myBackendSubnet \ --frontend-ip-name myFrontEnd \
- --backend-pool-name myBackEndPool
+ --backend-pool-name myBackEndPool
``` ### Create the health probe
@@ -292,7 +265,7 @@ A health probe checks all virtual machine instances to ensure they can send netw
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
-Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe?view=azure-cli-latest#az-network-lb-probe-create):
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create):
* Monitors the health of the virtual machines. * Named **myHealthProbe**.
@@ -305,7 +278,7 @@ Create a health probe with [az network lb probe create](/cli/azure/network/lb/pr
--lb-name myLoadBalancer \ --name myHealthProbe \ --protocol tcp \
- --port 80
+ --port 80
``` ### Create the load balancer rule
@@ -316,7 +289,7 @@ A load balancer rule defines:
* The backend IP pool to receive the traffic. * The required source and destination port.
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule?view=azure-cli-latest#az-network-lb-rule-create):
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create):
* Named **myHTTPRule** * Listening on **Port 80** in the frontend pool **myFrontEnd**.
@@ -346,37 +319,25 @@ Create a load balancer rule with [az network lb rule create](/cli/azure/network/
### Add virtual machines to load balancer backend pool
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool?view=azure-cli-latest#az-network-nic-ip-config-address-pool-add):
-
+Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add):
-#### VM1
* In backend address pool **myBackEndPool**. * In resource group **CreateIntLBQS-rg**.
-* Associated with network interface **myNicVM1** and **ipconfig1**.
+* Associated with network interfaces **myNicVM1**, **myNicVM2**, and **myNicVM3**.
* Associated with load balancer **myLoadBalancer**. ```azurecli-interactive
+ array=(VM1 VM2 VM3)
+ for vm in "${array[@]}"
+ do
az network nic ip-config address-pool add \ --address-pool myBackendPool \ --ip-config-name ipconfig1 \
- --nic-name myNicVM1 \
+ --nic-name myNic$vm \
--resource-group CreateIntLBQS-rg \ --lb-name myLoadBalancer
-```
-
-#### VM2
-* In backend address pool **myBackEndPool**.
-* In resource group **CreateIntLBQS-rg**.
-* Associated with network interface **myNicVM2** and **ipconfig1**.
-* Associated with load balancer **myLoadBalancer**.
+ done
-```azurecli-interactive
- az network nic ip-config address-pool add \
- --address-pool myBackendPool \
- --ip-config-name ipconfig1 \
- --nic-name myNicVM2 \
- --resource-group CreateIntLBQS-rg \
- --lb-name myLoadBalancer
``` # [**Basic SKU**](#tab/option-1-create-load-balancer-basic)
@@ -384,13 +345,15 @@ Add the virtual machines to the backend pool with [az network nic ip-config addr
>[!NOTE] >Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
-## Configure virtual network
+:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/resources-diagram-internal-basic.png" alt-text="Basic load balancer resources created in quickstart." border="false":::
+
+## Configure virtual network - Basic
Before you deploy VMs and deploy your load balancer, create the supporting virtual network resources. ### Create a virtual network
-Create a virtual network using [az network vnet create](/cli/azure/network/vnet?view=azure-cli-latest#az-network-vnet-createt):
+Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create):
* Named **myVNet**. * Address prefix of **10.1.0.0/16**.
@@ -408,11 +371,63 @@ Create a virtual network using [az network vnet create](/cli/azure/network/vnet?
--subnet-name myBackendSubnet \ --subnet-prefixes 10.1.0.0/24 ```+
+### Create a public IP address
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the bastion host:
+
+* Create a standard zone redundant public IP address named **myBastionIP**.
+* In **CreateIntLBQS-rg**.
+
+```azurecli-interactive
+az network public-ip create \
+ --resource-group CreateIntLBQS-rg \
+ --name myBastionIP \
+ --sku Standard
+```
+### Create a bastion subnet
+
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a bastion subnet:
+
+* Named **AzureBastionSubnet**.
+* Address prefix of **10.1.1.0/24**.
+* In virtual network **myVNet**.
+* In resource group **CreateIntLBQS-rg**.
+
+```azurecli-interactive
+az network vnet subnet create \
+ --resource-group CreateIntLBQS-rg \
+ --name AzureBastionSubnet \
+ --vnet-name myVNet \
+ --address-prefixes 10.1.1.0/24
+```
+
+### Create bastion host
+
+Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a bastion host:
+
+* Named **myBastionHost**.
+* In **CreateIntLBQS-rg**.
+* Associated with public IP **myBastionIP**.
+* Associated with virtual network **myVNet**.
+* In **eastus** location.
+
+```azurecli-interactive
+az network bastion create \
+ --resource-group CreateIntLBQS-rg \
+ --name myBastionHost \
+ --public-ip-address myBastionIP \
+ --vnet-name myVNet \
+ --location eastus
+```
+
+It can take a few minutes for the Azure Bastion host to deploy.
+ ### Create a network security group For a standard load balancer, the VMs in the backend address pool are required to have network interfaces that belong to a network security group.
-Create a network security group using [az network nsg create](/cli/azure/network/nsg?view=azure-cli-latest#az-network-nsg-create):
+Create a network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create):
* Named **myNSG**. * In resource group **CreateIntLBQS-rg**.
@@ -425,7 +440,7 @@ Create a network security group using [az network nsg create](/cli/azure/network
### Create a network security group rule
-Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule?view=azure-cli-latest#az-network-nsg-rule-create):
+Create a network security group rule using [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create):
* Named **myNSGRuleHTTP**. * In the network security group you created in the previous step, **myNSG**.
@@ -453,112 +468,48 @@ Create a network security group rule using [az network nsg rule create](/cli/azu
--priority 200 ```
-### Create network interfaces for the virtual machines
+## Create backend servers - Basic
-Create two network interfaces with [az network nic create](/cli/azure/network/nic?view=azure-cli-latest#az-network-nic-create):
+In this section, you create:
-#### VM1
+* Three network interfaces for the virtual machines.
+* An availability set for the virtual machines.
+* Three virtual machines to be used as backend servers for the load balancer.
-* Named **myNicVM1**.
-* In resource group **CreateIntLBQS-rg**.
-* In virtual network **myVNet**.
-* In subnet **myBackendSubnet**.
-* In network security group **myNSG**.
+### Create network interfaces for the virtual machines
-```azurecli-interactive
+Create three network interfaces with [az network nic create](/cli/azure/network/nic#az-network-nic-create):
- az network nic create \
- --resource-group CreateIntLBQS-rg \
- --name myNicVM1 \
- --vnet-name myVNet \
- --subnet myBackEndSubnet \
- --network-security-group myNSG
-```
-#### VM2
-
-* Named **myNicVM2**.
+* Named **myNicVM1**, **myNicVM2**, and **myNicVM3**.
* In resource group **CreateIntLBQS-rg**. * In virtual network **myVNet**. * In subnet **myBackendSubnet**.
+* In network security group **myNSG**.
```azurecli-interactive
- az network nic create \
- --resource-group CreateIntLBQS-rg \
- --name myNicVM2 \
- --vnet-name myVnet \
- --subnet myBackEndSubnet \
- --network-security-group myNSG
-```
-
-## Create backend servers
-
-In this section, you create:
-
-* A cloud configuration file named **cloud-init.txt** for the server configuration.
-* Availability set for the virtual machines
-* Two virtual machines to be used as backend servers for the load balancer.
-
-To verify that the load balancer was successfully created, you install NGINX on the virtual machines.
-
-### Create cloud-init configuration file
-
-Use a cloud-init configuration file to install NGINX and run a 'Hello World' Node.js app on a Linux virtual machine.
-
-In your current shell, create a file named cloud-init.txt. Copy and paste the following configuration into the shell. Ensure that you copy the whole cloud-init file correctly, especially the first line:
-
-```yaml
-#cloud-config
-package_upgrade: true
-packages:
- - nginx
- - nodejs
- - npm
-write_files:
- - owner: www-data:www-data
- - path: /etc/nginx/sites-available/default
- content: |
- server {
- listen 80;
- location / {
- proxy_pass http://localhost:3000;
- proxy_http_version 1.1;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection keep-alive;
- proxy_set_header Host $host;
- proxy_cache_bypass $http_upgrade;
- }
- }
- - owner: azureuser:azureuser
- - path: /home/azureuser/myapp/index.js
- content: |
- var express = require('express')
- var app = express()
- var os = require('os');
- app.get('/', function (req, res) {
- res.send('Hello World from host ' + os.hostname() + '!')
- })
- app.listen(3000, function () {
- console.log('Hello world app listening on port 3000!')
- })
-runcmd:
- - service nginx restart
- - cd "/home/azureuser/myapp"
- - npm init
- - npm install express -y
- - nodejs index.js
+ array=(myNicVM1 myNicVM2 myNicVM3)
+ for vmnic in "${array[@]}"
+ do
+ az network nic create \
+ --resource-group CreateIntLBQS-rg \
+ --name $vmnic \
+ --vnet-name myVNet \
+ --subnet myBackEndSubnet \
+ --network-security-group myNSG
+ done
``` ### Create availability set for virtual machines
-Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set?view=azure-cli-latest#az-vm-availability-set-create):
+Create the availability set with [az vm availability-set create](/cli/azure/vm/availability-set#az-vm-availability-set-create):
-* Named **myAvSet**.
+* Named **myAvailabilitySet**.
* In resource group **CreateIntLBQS-rg**. * Location **eastus**. ```azurecli-interactive az vm availability-set create \
- --name myAvSet \
+ --name myAvailabilitySet \
--resource-group CreateIntLBQS-rg \ --location eastus
@@ -566,50 +517,29 @@ Create the availability set with [az vm availability-set create](/cli/azure/vm/a
### Create virtual machines
-Create the virtual machines with [az vm create](/cli/azure/vm?view=azure-cli-latest#az-vm-create):
+Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
-#### VM1
-* Named **myVM1**.
+* Named **myVM1**, **myVM2**, and **myVM3**.
* In resource group **CreateIntLBQS-rg**.
-* Attached to network interface **myNicVM1**.
-* Virtual machine image **UbuntuLTS**.
-* Configuration file **cloud-init.txt** you created in step above.
-* In availability set **myAvSet**.
+* Attached to network interfaces **myNicVM1**, **myNicVM2**, and **myNicVM3**.
+* Virtual machine image **win2019datacenter**.
+* In availability set **myAvailabilitySet**.
-```azurecli-interactive
- az vm create \
- --resource-group CreateIntLBQS-rg \
- --name myVM1 \
- --nics myNicVM1 \
- --image UbuntuLTS \
- --admin-user azureuser \
- --generate-ssh-keys \
- --custom-data cloud-init.txt \
- --availability-set myAvSet \
- --no-wait
-
-```
-#### VM2
-* Named **myVM2**.
-* In resource group **CreateIntLBQS-rg**.
-* Attached to network interface **myNicVM2**.
-* Virtual machine image **UbuntuLTS**.
-* Configuration file **cloud-init.txt** you created in step above.
-* In **Zone 2**.
```azurecli-interactive
- az vm create \
+ array=(1 2 3)
+ for n in "${array[@]}"
+ do
+ az vm create \
--resource-group CreateIntLBQS-rg \
- --name myVM2 \
- --nics myNicVM2 \
- --image UbuntuLTS \
- --admin-user azureuser \
- --generate-ssh-keys \
- --custom-data cloud-init.txt \
- --availability-set myAvSet \
+ --name myVM$n \
+ --nics myNicVM$n \
+ --image win2019datacenter \
+ --admin-username azureuser \
+ --availability-set myAvailabilitySet \
--no-wait
+ done
```- It may take a few minutes for the VMs to deploy. ## Create basic load balancer
@@ -623,7 +553,7 @@ This section details how you can create and configure the following components o
### Create the load balancer resource
-Create a public load balancer with [az network lb create](/cli/azure/network/lb?view=azure-cli-latest#az-network-lb-create):
+Create a public load balancer with [az network lb create](/cli/azure/network/lb#az-network-lb-create):
* Named **myLoadBalancer**. * A frontend pool named **myFrontEnd**.
@@ -639,7 +569,7 @@ Create a public load balancer with [az network lb create](/cli/azure/network/lb?
--vnet-name myVNet \ --subnet myBackendSubnet \ --frontend-ip-name myFrontEnd \
- --backend-pool-name myBackEndPool
+ --backend-pool-name myBackEndPool
``` ### Create the health probe
@@ -648,7 +578,7 @@ A health probe checks all virtual machine instances to ensure they can send netw
A virtual machine with a failed probe check is removed from the load balancer. The virtual machine is added back into the load balancer when the failure is resolved.
-Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe?view=azure-cli-latest#az-network-lb-probe-create):
+Create a health probe with [az network lb probe create](/cli/azure/network/lb/probe#az-network-lb-probe-create):
* Monitors the health of the virtual machines. * Named **myHealthProbe**.
@@ -661,7 +591,7 @@ Create a health probe with [az network lb probe create](/cli/azure/network/lb/pr
--lb-name myLoadBalancer \ --name myHealthProbe \ --protocol tcp \
- --port 80
+ --port 80
``` ### Create the load balancer rule
@@ -672,7 +602,7 @@ A load balancer rule defines:
* The backend IP pool to receive the traffic. * The required source and destination port.
-Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule?view=azure-cli-latest#az-network-lb-rule-create):
+Create a load balancer rule with [az network lb rule create](/cli/azure/network/lb/rule#az-network-lb-rule-create):
* Named **myHTTPRule** * Listening on **Port 80** in the frontend pool **myFrontEnd**.
@@ -696,96 +626,33 @@ Create a load balancer rule with [az network lb rule create](/cli/azure/network/
``` ### Add virtual machines to load balancer backend pool
-Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool?view=azure-cli-latest#az-network-nic-ip-config-address-pool-add):
-
+Add the virtual machines to the backend pool with [az network nic ip-config address-pool add](/cli/azure/network/nic/ip-config/address-pool#az-network-nic-ip-config-address-pool-add):
-#### VM1
* In backend address pool **myBackEndPool**. * In resource group **CreateIntLBQS-rg**.
-* Associated with network interface **myNicVM1** and **ipconfig1**.
+* Associated with network interfaces **myNicVM1**, **myNicVM2**, and **myNicVM3**.
* Associated with load balancer **myLoadBalancer**. ```azurecli-interactive
+ array=(VM1 VM2 VM3)
+ for vm in "${array[@]}"
+ do
az network nic ip-config address-pool add \ --address-pool myBackendPool \ --ip-config-name ipconfig1 \
- --nic-name myNicVM1 \
+ --nic-name myNic$vm \
--resource-group CreateIntLBQS-rg \ --lb-name myLoadBalancer
-```
+ done
-#### VM2
-* In backend address pool **myBackEndPool**.
-* In resource group **CreateIntLBQS-rg**.
-* Associated with network interface **myNicVM2** and **ipconfig1**.
-* Associated with load balancer **myLoadBalancer**.
-
-```azurecli-interactive
- az network nic ip-config address-pool add \
- --address-pool myBackendPool \
- --ip-config-name ipconfig1 \
- --nic-name myNicVM2 \
- --resource-group CreateIntLBQS-rg \
- --lb-name myLoadBalancer
```- --- ## Test the load balancer
-### Create Azure Bastion public IP
-
-Use [az network public-ip create](/cli/azure/network/public-ip?view=azure-cli-latest#az-network-public-ip-create) to create a public ip address for the bastion host:
-
-* Create a standard zone redundant public IP address named **myBastionIP**.
-* In **CreateIntLBQS-rg**.
-
-```azurecli-interactive
- az network public-ip create \
- --resource-group CreateIntLBQS-rg \
- --name myBastionIP \
- --sku Standard
-```
-
-### Create Azure Bastion subnet
-
-Use [az network vnet subnet create](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-create) to create a subnet:
-
-* Named **AzureBastionSubnet**.
-* Address prefix of **10.1.1.0/24**.
-* In virtual network **myVNet**.
-* In resource group **CreateIntLBQS-rg**.
-
-```azurecli-interactive
- az network vnet subnet create \
- --resource-group CreateIntLBQS-rg \
- --name AzureBastionSubnet \
- --vnet-name myVNet \
- --address-prefixes 10.1.1.0/24
-```
-
-### Create Azure Bastion host
-Use [az network bastion create](/cli/azure/network/bastion?view=azure-cli-latest#az-network-bastion-create) to create a bastion host:
-
-* Named **myBastionHost**
-* In **CreateIntLBQS-rg**
-* Associated with public IP **myBastionIP**.
-* Associated with virtual network **myVNet**.
-* In **eastus** location.
-
-```azurecli-interactive
- az network bastion create \
- --resource-group CreateIntLBQS-rg \
- --name myBastionHost \
- --public-ip-address myBastionIP \
- --vnet-name myVNet \
- --location eastus
-```
-It will take a few minutes for the bastion host to deploy.
- ### Create test virtual machine
-Create the network interface with [az network nic create](/cli/azure/network/nic?view=azure-cli-latest#az-network-nic-create):
+Create the network interface with [az network nic create](/cli/azure/network/nic#az-network-nic-create):
* Named **myNicTestVM**. * In resource group **CreateIntLBQS-rg**.
@@ -801,14 +668,12 @@ Create the network interface with [az network nic create](/cli/azure/network/nic
--subnet myBackEndSubnet \ --network-security-group myNSG ```
-Create the virtual machine with [az vm create](/cli/azure/vm?view=azure-cli-latest#az-vm-create):
+Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create):
* Named **myTestVM**.
* In resource group **CreateIntLBQS-rg**.
* Attached to network interface **myNicTestVM**.
* Virtual machine image **Win2019Datacenter**.
-* Choose values for **\<adminpass>** and **\<adminuser>**.
-
```azurecli-interactive
  az vm create \
@@ -816,23 +681,41 @@ Create the virtual machine with [az vm create](/cli/azure/vm?view=azure-cli-late
    --name myTestVM \
    --nics myNicTestVM \
    --image Win2019Datacenter \
- --admin-username <adminuser> \
- --admin-password <adminpass> \
+ --admin-username azureuser \
--no-wait ``` Can take a few minutes for the virtual machine to deploy.
+## Install IIS
+
+Use [az vm extension set](/cli/azure/vm/extension#az_vm_extension_set) to install IIS on the virtual machines and set the default website to the computer name.
+
+```azurecli-interactive
+ array=(myVM1 myVM2 myVM3)
+ for vm in "${array[@]}"
+ do
+ az vm extension set \
+ --publisher Microsoft.Compute \
+ --version 1.8 \
+ --name CustomScriptExtension \
+ --vm-name $vm \
+ --resource-group CreateIntLBQS-rg \
+ --settings '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
+ done
+
+```
+
### Test

1. [Sign in](https://portal.azure.com) to the Azure portal.
-1. Find the private IP address for the load balancer on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer**.
+2. Find the private IP address for the load balancer on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myLoadBalancer**.
-2. Make note or copy the address next to **Private IP Address** in the **Overview** of **myLoadBalancer**.
+3. Make note or copy the address next to **Private IP Address** in the **Overview** of **myLoadBalancer**.
-3. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myTestVM** that is located in the **CreateIntLBQS-rg** resource group.
+4. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myTestVM** that is located in the **CreateIntLBQS-rg** resource group.
-4. On the **Overview** page, select **Connect**, then **Bastion**.
+5. On the **Overview** page, select **Connect**, then **Bastion**.
6. Enter the username and password entered during VM creation.
@@ -846,7 +729,7 @@ To see the load balancer distribute traffic across all three VMs, you can custom
## Clean up resources
-When no longer needed, use the [az group delete](/cli/azure/group?view=azure-cli-latest#az-group-delete) command to remove the resource group, load balancer, and all related resources.
+When no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, load balancer, and all related resources.
```azurecli-interactive
  az group delete \
@@ -854,13 +737,14 @@ When no longer needed, use the [az group delete](/cli/azure/group?view=azure-cli
```

## Next steps
-In this quickstart
+
+In this quickstart:
* You created a standard or public load balancer.
* Attached virtual machines.
* Configured the load balancer traffic rule and health probe.
* Tested the load balancer.
-To learn more about Azure Load Balancer, continue to
+To learn more about Azure Load Balancer, continue to:
> [!div class="nextstepaction"] > [What is Azure Load Balancer?](load-balancer-overview.md)\ No newline at end of file
logic-apps https://docs.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
@@ -2143,7 +2143,7 @@ formatNumber(1234567890, '0,0.00', 'is-is')
Suppose that you want to format the number `17.35`. This example formats the number to the string "$17.35".

```
-formatNumber(17.36, 'C2')
+formatNumber(17.35, 'C2')
```

*Example 4*
@@ -2151,7 +2151,7 @@ formatNumber(17.36, 'C2')
Suppose that you want to format the number `17.35`. This example formats the number to the string "17,35 kr".

```
-formatNumber(17.36, 'C2', 'is-is')
+formatNumber(17.35, 'C2', 'is-is')
```

<a name="getFutureTime"></a>
@@ -2807,15 +2807,11 @@ lastIndexOf('<text>', '<searchText>')
If the string or substring value is empty, the following behavior occurs:
-* If the string value is empty, `-1` is returned:
+* If only the string value is empty, the function returns `-1`.
-* If the string and substring values are both empty, `0` is returned.
+* If the string and substring values are both empty, the function returns `0`.
-* If only the substring value is empty, the greater of the following two values is returned:
-
- * `0`
-
- * The length of the string, minus 1.
+* If only the substring value is empty, the function returns the string length minus 1.
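These edge cases can be sketched with a small Python helper that mirrors the documented behavior (`last_index_of` is an illustrative name, not the Logic Apps implementation):

```python
def last_index_of(text: str, search: str) -> int:
    """Mirror the documented lastIndexOf edge cases."""
    if not text and not search:
        return 0              # both values empty
    if not text:
        return -1             # only the string value is empty
    if not search:
        return len(text) - 1  # only the substring value is empty
    # 0-based index of the last occurrence, or -1 when not found
    return text.rfind(search)

print(last_index_of("hello world", "world"))  # 6
print(last_index_of("", ""))                  # 0
print(last_index_of("", "x"))                 # -1
print(last_index_of("abc", ""))               # 2
```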
*Examples*
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/concept-open-source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-open-source.md
@@ -1,7 +1,7 @@
---
title: Open-source machine learning integration
titleSuffix: Azure Machine Learning
-description: Learn how to use open-source Python machine learning frameworks to train, deploy and manage end-to-end machine learning solutions in Azure Machine Learning.
+description: Learn how to use open-source Python machine learning frameworks to train, deploy, and manage end-to-end machine learning solutions in Azure Machine Learning.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
@@ -47,7 +47,7 @@ Training a deep learning model from scratch often requires large amounts of time
Reinforcement learning is an artificial intelligence technique that trains models using actions, states, and rewards: Reinforcement learning agents learn to take a set of predefined actions that maximize the specified rewards based on the current state of their environment.
-The [Ray RLLib](https://github.com/ray-project/ray) project has a set features that allow for high scalability throughout the training process. The iterative process is both time- and resource-intensive as reinforcement learning agents try to learn the optimal way of achieving a task. Ray RLLib also natively supports deep learning frameworks like TensorFlow and PyTorch.
+The [Ray RLLib](https://github.com/ray-project/ray) project has a set of features that allow for high scalability throughout the training process. The iterative process is both time- and resource-intensive as reinforcement learning agents try to learn the optimal way of achieving a task. Ray RLLib also natively supports deep learning frameworks like TensorFlow and PyTorch.
To learn how to use Ray RLLib with Azure Machine Learning, see [How to train a reinforcement learning model](how-to-use-reinforcement-learning.md).
@@ -96,4 +96,4 @@ Machine Learning Operations (MLOps), commonly thought of as DevOps for machine l
Using DevOps practices like continuous integration (CI) and continuous deployment (CD), you can automate the end-to-end machine learning lifecycle and capture governance data around it. You can define your [machine learning CI/CD pipeline in GitHub actions](./how-to-github-actions-machine-learning.md) to run Azure Machine Learning training and deployment tasks.
-Capturing software dependencies, metrics, metadata, data and model versioning are an important part of the MLOps process in order to build transparent, reproducible, and auditable pipelines. For this task, you can [use MLFlow in Azure Machine Learning](how-to-use-mlflow.md) as well as when [training machine learning models in Azure Databricks](./how-to-use-mlflow-azure-databricks.md).
+Capturing software dependencies, metrics, metadata, and data and model versions is an important part of the MLOps process in order to build transparent, reproducible, and auditable pipelines. For this task, you can [use MLFlow in Azure Machine Learning](how-to-use-mlflow.md) as well as when [training machine learning models in Azure Databricks](./how-to-use-mlflow-azure-databricks.md). You can also [deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-configure-auto-features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-features.md
@@ -66,9 +66,6 @@ The following table summarizes techniques that are automatically applied to your
|**Generate more features*** |For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.<br><br> *For forecasting tasks,* these additional DateTime features are created: ISO year, Half - half-year, Calendar month as string, Week, Day of week as string, Day of quarter, Day of year, AM/PM (0 if hour is before noon (12 pm), 1 otherwise), AM/PM as string, Hour of day (12-hr basis)<br/><br/>For Text features: Term frequency based on unigrams, bigrams, and trigrams. Learn more about [how this is done with BERT.](#bert-integration)|
|**Transform and encode***|Transform numeric features that have few unique values into categorical features.<br/><br/>One-hot encoding is used for low-cardinality categorical features. One-hot-hash encoding is used for high-cardinality categorical features.|
|**Word embeddings**|A text featurizer converts vectors of text tokens into sentence vectors by using a pre-trained model. Each word's embedding vector in a document is aggregated with the rest to produce a document feature vector.|
-|**Target encodings**|For categorical features, this step maps each category with an averaged target value for regression problems, and to the class probability for each class for classification problems. Frequency-based weighting and k-fold cross-validation are applied to reduce overfitting of the mapping and noise caused by sparse data categories.|
-|**Text target encoding**|For text input, a stacked linear model with bag-of-words is used to generate the probability of each class.|
-|**Weight of Evidence (WoE)**|Calculates WoE as a measure of correlation of categorical columns to the target column. WoE is calculated as the log of the ratio of in-class vs. out-of-class probabilities. This step produces one numeric feature column per class and removes the need to explicitly impute missing values and outlier treatment.|
|**Cluster Distance**|Trains a k-means clustering model on all numeric columns. Produces *k* new features (one new numeric feature per cluster) that contain the distance of each sample to the centroid of each cluster.|

## Data guardrails
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-mlflow-models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-mlflow-models.md new file mode 100644
@@ -0,0 +1,151 @@
+---
+title: Deploy MLflow models as web services
+titleSuffix: Azure Machine Learning
+description: Set up MLflow with Azure Machine Learning to deploy your ML models as an Azure web service.
+services: machine-learning
+author: shivp950
+ms.author: shipatel
+ms.service: machine-learning
+ms.subservice: core
+ms.reviewer: nibaccam
+ms.date: 12/23/2020
+ms.topic: conceptual
+ms.custom: how-to, devx-track-python
+---
+
+# Deploy MLflow models as Azure web services (preview)
+
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model as an Azure web service, so you can leverage and apply Azure Machine Learning's model management and data drift detection capabilities to your production models.
+
+Azure Machine Learning offers deployment configurations for:
+* Azure Container Instance (ACI), which is a suitable choice for a quick dev-test deployment.
+* Azure Kubernetes Service (AKS), which is recommended for scalable production deployments.
+
+> [!TIP]
+> The information in this document is primarily for data scientists and developers who want to deploy their MLflow model to an Azure Machine Learning web service endpoint. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+## MLflow with Azure Machine Learning deployment
+
+MLflow is an open-source library for managing the life cycle of your machine learning experiments. Its integration with Azure Machine Learning allows you to extend this management beyond model training to the deployment phase of your production model.
+
+The following diagram demonstrates that with the MLflow deploy API and Azure Machine Learning, you can deploy models created with popular frameworks, like PyTorch, TensorFlow, scikit-learn, and others, as Azure web services and manage them in your workspace.
+
+![Deploy MLflow models with Azure Machine Learning](./media/how-to-deploy-mlflow-models/mlflow-diagram-deploy.png)
+>[!NOTE]
+> As an open source library, MLflow changes frequently. As such, the functionality made available via the Azure Machine Learning and MLflow integration should be considered as a preview, and not fully supported by Microsoft.
+
+## Prerequisites
+
+* A machine learning model. If you don't have a trained model, find the notebook example that best fits your compute scenario in [this repo](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) and follow its instructions.
+* [Set up the MLflow Tracking URI to connect Azure Machine Learning](how-to-use-mlflow.md#track-local-runs).
+* Install the `azureml-mlflow` package.
+  * This package automatically brings in `azureml-core` of the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install?preserve-view=true&view=azure-ml-py), which provides the connectivity for MLflow to access your workspace.
+* See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
+
+## Deploy to Azure Container Instance (ACI)
+
+To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
+
+Set up your deployment configuration with the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice?preserve-view=true&view=azure-ml-py#&preserve-view=truedeploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method. You can also add tags and descriptions to help keep track of your web service.
+
+```python
+from azureml.core.webservice import AciWebservice, Webservice
+
+# Set the model path to the model folder created by your run
+model_path = "model"
+
+# Configure
+aci_config = AciWebservice.deploy_configuration(cpu_cores=1,
+ memory_gb=1,
+ tags={'method' : 'sklearn'},
+ description='Diabetes model',
+ location='eastus2')
+```
+
+Then, register and deploy the model in one step with MLflow's [deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) method for Azure Machine Learning.
+
+```python
+(webservice,model) = mlflow.azureml.deploy( model_uri='runs:/{}/{}'.format(run.id, model_path),
+ workspace=ws,
+ model_name='sklearn-model',
+ service_name='diabetes-model-1',
+ deployment_config=aci_config,
+ tags=None, mlflow_home=None, synchronous=True)
+
+webservice.wait_for_deployment(show_output=True)
+```
+
+## Deploy to Azure Kubernetes Service (AKS)
+
+To deploy your MLflow model to an Azure Machine Learning web service, your model must be set up with the [MLflow Tracking URI to connect with Azure Machine Learning](how-to-use-mlflow.md).
+
+To deploy to AKS, first create an AKS cluster using the [ComputeTarget.create()](/python/api/azureml-core/azureml.core.computetarget?preserve-view=true&view=azure-ml-py#&preserve-view=truecreate-workspace--name--provisioning-configuration-) method. It may take 20-25 minutes to create a new cluster.
+
+```python
+from azureml.core.compute import AksCompute, ComputeTarget
+
+# Use the default configuration (can also provide parameters to customize)
+prov_config = AksCompute.provisioning_configuration()
+
+aks_name = 'aks-mlflow'
+
+# Create the cluster
+aks_target = ComputeTarget.create(workspace=ws,
+ name=aks_name,
+ provisioning_configuration=prov_config)
+
+aks_target.wait_for_completion(show_output = True)
+
+print(aks_target.provisioning_state)
+print(aks_target.provisioning_errors)
+```
+Set up your deployment configuration with the [deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aciwebservice?preserve-view=true&view=azure-ml-py#&preserve-view=truedeploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none-) method. You can also add tags and descriptions to help keep track of your web service.
+
+```python
+from azureml.core.webservice import Webservice, AksWebservice
+
+# Set the web service configuration (using default here with app insights)
+aks_config = AksWebservice.deploy_configuration(enable_app_insights=True, compute_target_name='aks-mlflow')
+
+```
+
+Then, register and deploy the model in one step with MLflow's [deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) method for Azure Machine Learning.
+
+```python
+
+# Webservice creation using single command
+from azureml.core.webservice import AksWebservice, Webservice
+
+# set the model path
+model_path = "model"
+
+(webservice, model) = mlflow.azureml.deploy( model_uri='runs:/{}/{}'.format(run.id, model_path),
+ workspace=ws,
+ model_name='sklearn-model',
+ service_name='my-aks',
+ deployment_config=aks_config,
+ tags=None, mlflow_home=None, synchronous=True)
+webservice.wait_for_deployment()
+```
+
+The service deployment can take several minutes.
+
+## Clean up resources
+
+If you don't plan to use your deployed web service, use `service.delete()` to delete it from your notebook. For more information, see the documentation for [WebService.delete()](/python/api/azureml-core/azureml.core.webservice%28class%29?preserve-view=true&view=azure-ml-py#&preserve-view=truedelete--).
+
+## Example notebooks
+
+The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/using-mlflow) demonstrate and expand upon concepts presented in this article.
+
+> [!NOTE]
+> A community-driven repository of examples using MLflow can be found at https://github.com/Azure/azureml-examples.
+
+## Next steps
+
+* [Manage your models](concept-model-management-and-deployment.md).
+* Monitor your production models for [data drift](./how-to-enable-data-collection.md).
+* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md).
+
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-fairness-aml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-machine-learning-fairness-aml.md
@@ -42,7 +42,7 @@ Later versions of Fairlearn should also work in the following example code.
The following example shows how to use the fairness package. We will upload model fairness insights into Azure Machine Learning and see the fairness assessment dashboard in Azure Machine Learning studio.
-1. Train a sample model in a Jupyter notebook.
+1. Train a sample model in Jupyter Notebook.
For the dataset, we use the well-known adult census dataset, which we fetch from OpenML. We pretend we have a loan decision problem with the label indicating whether an individual repaid a previous loan. We will train a model to predict if previously unseen individuals will repay a loan. Such a model might be used in making loan decisions.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-set-up-training-targets.md
@@ -203,6 +203,18 @@ method, or from the Experiment tab view in Azure Machine Learning studio client
Internally, Azure ML concatenates the blocks with the same metric name into a contiguous list.
+* **Run fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.
+
+ Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.
+
+  If you are running into this issue for local runs, check the version of PyJWT installed in the environment where you start runs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You can check the version of PyJWT, and uninstall and install the right version, as follows:
+  1. Start a command shell and activate the conda environment where azureml-core is installed.
+  2. Enter `pip freeze` and look for `PyJWT`. If found, the version listed should be < 2.0.0.
+  3. If the listed version is not a supported version, run `pip uninstall PyJWT` in the command shell and enter `y` for confirmation.
+  4. Install using `pip install 'PyJWT<2.0.0'`.
+
+ If you are submitting a user-created environment with your run, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
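  The version check in the steps above can also be automated with a short Python sketch (standard library only; `pyjwt_supported` is a hypothetical helper name):

  ```python
  import importlib.metadata

  def pyjwt_supported() -> bool:
      """Return True if the installed PyJWT is < 2.0.0, or not installed at all."""
      try:
          version = importlib.metadata.version("PyJWT")
      except importlib.metadata.PackageNotFoundError:
          return True  # nothing installed, nothing to downgrade
      major = int(version.split(".")[0])
      return major < 2

  if not pyjwt_supported():
      print("Unsupported PyJWT found; run: pip install 'PyJWT<2.0.0'")
  ```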
+
## Next steps

* [Tutorial: Train a model](tutorial-train-models-with-aml.md) uses a managed compute target to train a model.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow-azure-databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
@@ -19,7 +19,7 @@ In this article, learn how to enable MLflow's tracking URI and logging API, coll
[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLFlow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts. Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/).
-See [Track experiment runs and create endpoints with MLflow and Azure Machine Learning](how-to-use-mlflow.md) for additional MLflow and Azure Machine Learning functionality integrations.
+See [Track experiment runs with MLflow and Azure Machine Learning](how-to-use-mlflow.md) for additional MLflow and Azure Machine Learning functionality integrations.
>[!NOTE] > As an open source library, MLflow changes frequently. As such, the functionality made available via the Azure Machine Learning and MLflow integration should be considered as a preview, and not fully supported by Microsoft.
@@ -177,8 +177,8 @@ When you are ready to create an endpoint for your ML models. You can deploy as,
You can leverage the [mlflow.azureml.deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) API to deploy a model to your Azure Machine Learning workspace. If you only registered the model to the Azure Databricks workspace, as described in the [register models with MLflow](#register-models-with-mlflow) section, specify the `model_name` parameter to register the model into the Azure Machine Learning workspace. Azure Databricks runs can be deployed to the following endpoints:
-* [Azure Container Instance](how-to-deploy-models-with-mlflow.md#deploy-to-aci)
-* [Azure Kubernetes Service](how-to-deploy-models-with-mlflow.md#deploy-to-aks)
+* [Azure Container Instance](how-to-deploy-mlflow-models.md#deploy-to-azure-container-instance-aci)
+* [Azure Kubernetes Service](how-to-deploy-mlflow-models.md#deploy-to-azure-kubernetes-service-aks)
### Deploy models to ADB endpoints for batch scoring
@@ -228,7 +228,7 @@ If you don't plan to use the logged metrics and artifacts in your workspace, the
The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow) demonstrate and expand upon concepts presented in this article.

## Next steps
+* [Deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
* [Manage your models](concept-model-management-and-deployment.md).
-* [Track experiment runs and create endpoints with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
+* [Track experiment runs with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
* Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/).\ No newline at end of file
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow.md
@@ -206,7 +206,7 @@ run.get_metrics()
Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema, making it easy to export and import these models across different workflows. The MLflow-related metadata, such as run ID, is also tagged with the registered model for traceability. Users can submit training runs, and register and deploy models produced from MLflow runs.
-If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](how-to-deploy-models-with-mlflow.md).
+If you want to deploy and register your production ready model in one step, see [Deploy and register MLflow models](how-to-deploy-mlflow-models.md).
To register and view a model from a run, use the following steps:
@@ -255,7 +255,7 @@ The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNot
## Next steps
-* [Deploy models with MLflow](how-to-deploy-models-with-mlflow.md).
+* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
* Monitor your production models for [data drift](./how-to-enable-data-collection.md). * [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md). * [Manage your models](concept-model-management-and-deployment.md).\ No newline at end of file
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/overview-what-is-azure-ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-azure-ml.md
@@ -113,7 +113,7 @@ Azure Machine Learning works with other services on the Azure platform, and also
+ __Azure Virtual Networks__. For more information, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md).
+ __Azure Pipelines__. For more information, see [Train and deploy machine learning models](/azure/devops/pipelines/targets/azure-machine-learning).
+ __Git repository logs__. For more information, see [Git integration](concept-train-model-git-integration.md).
-+ __MLFlow__. For more information, see [MLflow to track metrics and deploy models](how-to-use-mlflow.md)
++ __MLFlow__. For more information, see [MLflow to track metrics](how-to-use-mlflow.md) and [Deploy MLflow models as a web service](how-to-deploy-mlflow-models.md)
+ __Kubeflow__. For more information, see [build end-to-end workflow pipelines](https://www.kubeflow.org/docs/azure/).

### Secure communications
marketplace https://docs.microsoft.com/en-us/azure/marketplace/azure-vm-get-sas-uri https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-get-sas-uri.md
@@ -6,7 +6,7 @@ ms.subservice: partnercenter-marketplace-publisher
ms.topic: how-to author: iqshahmicrosoft ms.author: krsh
-ms.date: 10/19/2020
+ms.date: 1/5/2021
--- # How to generate a SAS URI for a VM image
@@ -58,7 +58,7 @@ There are two common tools used to create a SAS address (URL):
2. Create a PowerShell file (.ps1 file extension), copy in the following code, then save it locally.

   ```azurecli-interactive
-   az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net' --name <vhd-name> --permissions rl --start '<start-date>' --expiry '<expiry-date>'
+   az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net' --name <container-name> --permissions rl --start '<start-date>' --expiry '<expiry-date>'
   ```

3. Edit the file to use the following parameter values. Provide dates in UTC datetime format, such as 2020-04-01T00:00:00Z.
@@ -71,7 +71,7 @@ There are two common tools used to create a SAS address (URL):
   Here's an example of proper parameter values (at the time of this writing):

   ```azurecli-interactive
-   az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=st00009;AccountKey=6L7OWFrlabs7Jn23OaR3rvY5RykpLCNHJhxsbn9ON c+bkCq9z/VNUPNYZRKoEV1FXSrvhqq3aMIDI7N3bSSvPg==;EndpointSuffix=core.windows.net' --name vhds --permissions rl --start '2020-04-01T00:00:00Z' --expiry '2021-04-01T00:00:00Z'
+   az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=st00009;AccountKey=6L7OWFrlabs7Jn23OaR3rvY5RykpLCNHJhxsbn9ON c+bkCq9z/VNUPNYZRKoEV1FXSrvhqq3aMIDI7N3bSSvPg==;EndpointSuffix=core.windows.net' --name <container-name> --permissions rl --start '2020-04-01T00:00:00Z' --expiry '2021-04-01T00:00:00Z'
   ```

1. Save the changes.
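Under the hood, the `sig` parameter of a SAS token is an HMAC-SHA256 signature over a string-to-sign, computed with the decoded storage account key and base64-encoded. A minimal Python sketch of just that signing step (the string-to-sign and key below are illustrative only; use the Azure CLI or SDK to build real tokens):

```python
import base64
import hashlib
import hmac

def sign_sas(string_to_sign: str, account_key_b64: str) -> str:
    """HMAC-SHA256 the string-to-sign with the decoded account key; base64 the digest."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Illustrative fields only: permissions, start, and expiry from the example above
demo_key = base64.b64encode(b"demo-account-key").decode("utf-8")
signature = sign_sas("rl\n2020-04-01T00:00:00Z\n2021-04-01T00:00:00Z", demo_key)
print(signature)
```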
marketplace https://docs.microsoft.com/en-us/azure/marketplace/create-managed-service-offer-listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-listing.md new file mode 100644
@@ -0,0 +1,80 @@
+---
+title: How to configure your Managed Service offer listing details in Microsoft Partner Center
+description: Learn how to configure your Managed Service offer listing details on Azure Marketplace using Partner Center.
+author: Microsoft-BradleyWright
+ms.author: brwrigh
+ms.reviewer: anbene
+ms.service: marketplace
+ms.subservice: partnercenter-marketplace-publisher
+ms.topic: how-to
+ms.date: 12/23/2020
+---
+
+# How to configure your Managed Service offer listing details
+
+The information you provide on the **Offer listing** page of Partner Center will be displayed on Azure Marketplace. This includes your offer name, description, media, and other marketing assets.
+
+> [!NOTE]
+> If your offer is in a language other than English, the offer listing can be in that language, but the description must begin or end with the English phrase "This service is available in &lt;language of your offer content>". You can also provide supporting documents in a language that's different from the one used in the offer listing details.
+
+On the **Offer listing** page in Partner Center, provide the information described below. To learn more about the listing details for your Managed Service offer, review [Plan a Managed Service offer](./plan-managed-service-offer.md).
+
+## Marketplace details
+
+1. The **Name** box is pre-filled with the name you entered earlier in the New offer dialog box, but you can change it at any time. This name will appear as the title of your offer listing on the online store.
+2. In the **Search results summary** box, describe the purpose or goal of your offer in 100 characters or less.
+3. In the **Short description** field, provide a short description of your offer (up to 256 characters). It'll be displayed on your offer listing in the Azure portal.
+4. In the **Description** field, describe your Managed Service offer. You can enter up to 2,000 characters of text in this box, including HTML tags and spaces. For information about HTML formatting, see [HTML tags supported in the offer descriptions](./supported-html-tags.md).
+5. In the **Privacy policy link** box, enter a link (starting with https) to your organization's privacy policy. You're responsible for ensuring your offer complies with privacy laws and regulations, and for providing a valid privacy policy.
+
+## Useful links
+
+You have the option to provide supplemental online documents about your solution:
+
+1. Select **Add a link**.
+2. Provide a name and web address (starting with https) for each document.
+
+## Contact information
+
+Enter the name, email address, and phone number of two people in your company (you can be one of them): a support contact and an engineering contact. We'll use this information to communicate with you about your offer. This information isn't shown to customers but may be provided to Cloud Solution Provider (CSP) partners.
+
+## Support URLs
+
+If you have support websites for Azure Global customers and/or Azure Government customers, enter their URLs, starting with https.
+
+## Marketplace media
+
+> [!NOTE]
+> If you have an issue uploading files, make sure your local network does not block the `https://upload.xboxlive.com` service used by Partner Center.
+
+### Add logos
+
+Under **Logos**, upload a **Large** logo in .PNG format between 216 x 216 and 350 x 350 pixels. Partner Center will automatically create **Medium** and **Small** logos, which you can replace later.
+
+* The large logo (from 216 x 216 to 350 x 350 px) appears on your offer listing on Azure Marketplace.
+* The medium logo (90 x 90 px) is shown when a new resource is created.
+* The small logo (48 x 48 px) is used on the Azure Marketplace search results.
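The Large logo constraint above can be checked locally before you upload. The following is an illustrative sketch, not part of Partner Center; the helper name and the square-aspect assumption are mine (the doc states only the 216 x 216 to 350 x 350 pixel range):

```python
def is_valid_large_logo(width: int, height: int) -> bool:
    """Check the stated range for the Large logo: a square image
    between 216 x 216 and 350 x 350 pixels (square aspect assumed)."""
    return width == height and 216 <= width <= 350
```

You could feed this with dimensions read from the PNG (for example, via Pillow's `Image.open(path).size`).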
+
+### Add screenshots (optional)
+
+Add up to five images that demonstrate your offer. All images must be 1280 x 720 pixels in size and in .PNG format.
+
+1. Under **Screenshots**, drag and drop your PNG file onto the **Screenshot** box.
+2. Select **Add image caption**.
+3. In the dialog box that appears, enter a caption.
+4. Repeat steps 1 through 3 to add additional screenshots.
+
+### Add videos (optional)
+
+You can add links to YouTube or Vimeo videos that demonstrate your offer. These videos are shown to customers along with your offer. You must enter a thumbnail image of the video, sized 1280 x 720 pixels and in .PNG format. Add a maximum of five videos per offer.
+
+1. Under **Videos**, select **Add video link**.
+2. In the boxes that appear, enter the name and link for your video.
+3. Drag and drop a PNG file (1280 x 720 pixels) onto the gray **Thumbnail** box.
+4. To add another video, repeat steps 1 through 3.
+
+Select **Save draft** before continuing to the next tab: **Preview audience**.
+
+## Next steps
+
+* [Add a preview audience](create-managed-service-offer-preview.md)
marketplace https://docs.microsoft.com/en-us/azure/marketplace/create-managed-service-offer-plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-plans.md new file mode 100644
@@ -0,0 +1,118 @@
+---
+title: How to create plans for your Managed Service offer on Azure Marketplace
+description: Learn how to create plans for your Managed Service offer on Azure Marketplace using Microsoft Partner Center.
+author: Microsoft-BradleyWright
+ms.author: brwrigh
+ms.reviewer: anbene
+ms.service: marketplace
+ms.subservice: partnercenter-marketplace-publisher
+ms.topic: how-to
+ms.date: 12/23/2020
+---
+
+# How to create plans for your Managed Service offer
+
+Managed Service offers sold through the Microsoft commercial marketplace must have at least one plan. You can create a variety of plans with different options within the same offer. These plans (sometimes referred to as SKUs) can differ in terms of version, monetization, or tiers of service. For detailed guidance on plans, see [Plans and pricing for commercial marketplace offers](./plans-pricing.md).
+
+## Create a plan
+
+1. On the **Plan overview** tab of your offer in Partner Center, select **+ Create new plan**.
+2. In the dialog box that appears, under **Plan ID**, enter a unique plan ID. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**. This ID will be visible to your customers.
+3. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 50 characters. This name will be visible to your customers.
+4. Select **Create**.
+
+## Define the plan listing
+
+On the **Plan listing** tab, define the plan name and description as you want them to appear in the commercial marketplace.
+
+1. The **Plan name** box displays the name you provided earlier for this plan. You can change it at any time. This name will appear in the commercial marketplace as the title of your offer's plan.
+2. In the **Plan summary** box, provide a short description of your plan, which may be used in marketplace search results.
+3. In the **Plan description** box, explain what makes this plan unique and different from other plans within your offer.
+4. Select **Save draft** before continuing to the next tab.
+
+## Define pricing and availability
+
+The only pricing model available for Managed Service offers is **Bring your own license (BYOL)**. This means that you bill your customers directly for costs related to this offer, and Microsoft doesn't charge you any fees.
+
+You can configure each plan to be visible to everyone (public) or to only a specific audience (private).
+
+> [!NOTE]
+> Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
+
+> [!IMPORTANT]
+> Once a plan has been published as public, you can't change it to private. To control which customers can accept your offer and delegate resources, use a private plan. With a public plan, you can't restrict availability to certain customers or even to a certain number of customers (although you can stop selling the plan completely if you choose to do so). You can remove access to a delegation after a customer accepts an offer only if you included an Authorization with the Role Definition set to Managed Services Registration Assignment Delete Role when you published the offer. You can also reach out to the customer and ask them to remove your access.
+
+## Make your plan public
+
+1. Under **Plan visibility**, select **Public**.
+2. Select **Save draft**. To return to the Plan overview tab, select **Plan overview** in the upper left.
+3. To create another plan for this offer, select **+ Create new plan** in the **Plan overview** tab.
+
+## Make your plan private
+
+You grant access to a private plan using Azure subscription IDs. You can add a maximum of 10 subscription IDs manually or up to 10,000 subscription IDs using a .CSV file.
+
+To add up to 10 subscription IDs manually:
+
+1. Under **Plan visibility**, select **Private**.
+2. Enter the Azure subscription ID of the audience you want to grant access to.
+3. Optionally, enter a description of this audience in the **Description** box.
+4. To add another ID, select **Add ID (Max 10)**.
+5. When you're done adding IDs, select **Save draft**.
+
+To add up to 10,000 subscription IDs with a .CSV file:
+
+1. Under **Plan visibility**, select **Private**.
+2. Select the **Export Audience (csv)** link. This will download a .CSV file.
+3. Open the .CSV file. In the **Id** column, enter the Azure subscription IDs you want to grant access to.
+4. In the **Description** column, you have the option to add a description for each entry.
+5. In the **Type** column, add **SubscriptionId** to each row that has an ID.
+6. Save the file as a .CSV file.
+7. In Partner Center, select the **Import Audience (csv)** link.
+8. In the **Confirm** dialog box, select **Yes**, then upload the .CSV file.
+9. Select **Save draft**.
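The steps above can be scripted when you maintain a large private audience. This sketch fills the three columns the exported template uses (Id, Description, Type); the helper name, placeholder GUID, and descriptions are illustrative:

```python
import csv

def write_audience_csv(path, entries):
    """Write a private-plan audience file with the Id, Description, and
    Type columns described above; Type is SubscriptionId for each row."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Id", "Description", "Type"])
        for subscription_id, description in entries:
            writer.writerow([subscription_id, description, "SubscriptionId"])

# Placeholder subscription ID and description for illustration only.
write_audience_csv("audience.csv", [
    ("00000000-0000-0000-0000-000000000000", "Contoso test subscription"),
])
```

The resulting `audience.csv` can then be uploaded through the **Import Audience (csv)** link.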
+
+## Technical configuration
+
+In this section, you'll create a manifest with the authorization information needed to manage customer resources. This information is required to enable [Azure delegated resource management](../lighthouse/concepts/azure-delegated-resource-management.md).
+
+Review [Tenants, roles, and users in Azure Lighthouse scenarios](../lighthouse/concepts/tenants-users-roles.md#best-practices-for-defining-users-and-roles) to understand which roles are supported and the best practices for defining your authorizations.
+
+> [!NOTE]
+> The users and roles in your Authorization entries will apply to every customer who activates the plan. If you want to limit access to a specific customer, you'll need to publish a private plan for their exclusive use.
+
+### Manifest
+
+1. Under **Manifest**, provide a **Version** for the manifest. Use the format n.n.n (for example, 1.2.5).
+2. Enter your **Tenant ID**. This is the GUID of your organization's Azure Active Directory (Azure AD) tenant; that is, the managing tenant from which you will access your customers' resources. If you don't have it handy, you can find it by hovering over your account name in the upper-right corner of the Azure portal, or by selecting **Switch directory**.
+
+If you publish a new version of your offer and need to create an updated manifest, select **+ New manifest**. Be sure to increase the version number from the previous manifest version.
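The two manifest rules above (the n.n.n format, and increasing the version with each new manifest) are easy to misapply; this illustrative sketch checks both (helper names are mine, not a Partner Center API):

```python
import re

def is_valid_manifest_version(version: str) -> bool:
    """Check the stated n.n.n format (for example, 1.2.5)."""
    return re.fullmatch(r"\d+\.\d+\.\d+", version) is not None

def is_newer(previous: str, current: str) -> bool:
    """Compare versions numerically, so 1.10.0 sorts after 1.9.0."""
    return tuple(map(int, current.split("."))) > tuple(map(int, previous.split(".")))
```

Numeric comparison matters here: a plain string comparison would wrongly treat "1.10.0" as older than "1.9.0".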
+
+### Authorizations
+
+Authorizations define the entities in your managing tenant who can access resources and subscriptions for customers who purchase the plan. Each of these entities is assigned a built-in role that grants specific levels of access.
+
+You can create up to 20 authorizations for each plan.
+
+> [!TIP]
+> In most cases, you'll want to assign roles to an Azure AD user group or service principal, rather than to a series of individual user accounts. This lets you add or remove access for individual users without having to update and republish the plan when your access requirements change. When assigning roles to Azure AD groups, [the group type should be Security and not Office 365](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). For additional recommendations, see [Tenants, roles, and users in Azure Lighthouse scenarios](../lighthouse/concepts/tenants-users-roles.md).
+
+For each Authorization, you'll need to provide the following information. You can then select **+ Add authorization** as many times as needed to add more users and role definitions.
+
+* **AAD Object ID**: the Azure AD identifier of a user, user group, or application that will be granted certain permissions (as defined by the Role Definition) to your customers' resources.
+* **AAD Object Display Name**: a friendly name to help the customer understand the purpose of this authorization. The customer will see this name when delegating resources.
+* **Role definition**: select one of the available Azure built-in roles from the list. This role determines the permissions that the user, group, or application in the **AAD Object ID** field will have on your customers' resources. For descriptions of these roles, see [Built-in roles](../role-based-access-control/built-in-roles.md) and [Role support for Azure Lighthouse](../lighthouse/concepts/tenants-users-roles.md#role-support-for-azure-lighthouse).
+
+> [!NOTE]
+> As applicable new built-in roles are added to Azure, they will become available here. There may be some delay before they appear.
+
+* **Assignable Roles**: this option will appear only if you have selected User Access Administrator in the **Role Definition** for this authorization. If so, you must add one or more assignable roles here. The user in the **Azure AD Object ID** field will be able to assign these roles to managed identities, which is required in order to [deploy policies that can be remediated](../lighthouse/how-to/deploy-policy-remediation.md). No other permissions normally associated with the User Access Administrator role will apply to this user.
+
+> [!TIP]
+> To ensure you can [remove access to a delegation](../lighthouse/how-to/remove-delegation.md) if needed, include an **Authorization** with the **Role Definition** set to [Managed Services Registration Assignment Delete Role](../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role). If this role is not assigned, delegated resources can only be removed by a user in the customer's tenant.
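Two of the constraints above (at most 20 authorizations per plan, and including the delete role so you can later remove a delegation) can be sanity-checked before you publish. This is an illustrative sketch; the dictionary shape and the placeholder GUID are assumptions, not a Partner Center API:

```python
DELETE_ROLE = "Managed Services Registration Assignment Delete Role"

def check_authorizations(authorizations):
    """Return warnings for a plan's Authorization entries, based on the
    constraints described above."""
    warnings = []
    if len(authorizations) > 20:
        warnings.append("A plan supports at most 20 authorizations.")
    if not any(a["role"] == DELETE_ROLE for a in authorizations):
        warnings.append("Consider adding the delete role so you can remove a delegation yourself.")
    return warnings

# Placeholder GUID; a real entry would use your Azure AD group's object ID.
warnings = check_authorizations([
    {"object_id": "00000000-0000-0000-0000-000000000000",
     "display_name": "Contoso ops group", "role": "Contributor"},
])
```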
+
+Once you've completed all sections for your plan, you can select **+ Create new plan** to create additional plans. When you're done, select **Save draft** before continuing.
+
+## Next steps
+
+* [Review and publish](review-publish-offer.md)
marketplace https://docs.microsoft.com/en-us/azure/marketplace/create-managed-service-offer-preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer-preview.md new file mode 100644
@@ -0,0 +1,45 @@
+---
+title: How to add a preview audience for your Managed Service offer
+description: Learn how to add a preview audience for your Managed Service offer in Microsoft Partner Center.
+author: Microsoft-BradleyWright
+ms.author: brwrigh
+ms.reviewer: anbene
+ms.service: marketplace
+ms.subservice: partnercenter-marketplace-publisher
+ms.topic: how-to
+ms.date: 12/23/2020
+---
+
+# How to add a preview audience for your Managed Service offer
+
+This article describes how to configure a preview audience for a Managed Service offer in the commercial marketplace using Partner Center. The preview audience can review your offer before it goes live.
+
+## Define a preview audience
+
+On the **Preview audience** page, you can define a limited audience who can review your Managed Service offer before you publish it live to the broader marketplace audience. You define the preview audience using Azure subscription IDs, along with an optional description for each. Neither of these fields can be seen by customers. You can find your Azure subscription ID on the **Subscriptions** page on the Azure portal.
+
+Add at least one Azure subscription ID, either individually (up to 10) or by uploading a CSV file (up to 100), to define who can preview your offer before it's published live. If your offer is already live, you may still define a preview audience for testing updates to your offer.
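Azure subscription IDs are GUIDs, so a quick format check before you paste them into the preview audience can catch copy-paste mistakes. The helper below is an illustrative sketch, not part of Partner Center:

```python
import re

GUID = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}")

def looks_like_subscription_id(value: str) -> bool:
    """Loose sanity check: Azure subscription IDs are GUIDs."""
    return GUID.fullmatch(value.strip()) is not None
```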
+
+> [!NOTE]
+> A *preview* audience differs from a *private* audience. A preview audience is allowed access to your offer before it's published live in the online stores. They can see and validate all plans, including plans that will be available only to a private audience after your offer is fully published in the marketplace. A private audience (defined on a plan's **Availability** tab) has exclusive access to a particular plan.
+
+## Add subscription IDs manually
+
+1. On the **Preview audience** page, add a single Azure subscription ID and an optional description in the boxes provided.
+2. To add another subscription ID, select the **Add ID (Max 10)** link.
+3. Select **Save draft** before continuing to the next tab.
+
+## Add subscription IDs using a CSV file
+
+1. On the **Preview audience** page, select the **Export Audience (csv)** link.
+2. Open the CSV file. In the **Id** column, enter the Azure subscription IDs you want to add to the preview audience.
+3. In the **Description** column, you have the option to add a description for each entry.
+4. In the **Type** column, add **SubscriptionId** to each row that has an ID.
+5. Save the file as a CSV file.
+6. On the **Preview audience** page, select the **Import Audience (csv)** link.
+7. In the **Confirm** dialog box, select **Yes**, then upload the CSV file.
+8. Select **Save draft** before continuing to the next tab.
+
+## Next steps
+
+* [Create plans](create-managed-service-offer-plans.md)
marketplace https://docs.microsoft.com/en-us/azure/marketplace/create-managed-service-offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/create-managed-service-offer.md new file mode 100644
@@ -0,0 +1,78 @@
+---
+title: How to create a Managed Service offer in the Microsoft commercial marketplace
+description: Learn how to create a new Managed Service offer for Azure Marketplace using the commercial marketplace program in Microsoft Partner Center.
+author: Microsoft-BradleyWright
+ms.author: brwrigh
+ms.reviewer: anbene
+ms.service: marketplace
+ms.subservice: partnercenter-marketplace-publisher
+ms.topic: how-to
+ms.date: 12/23/2020
+---
+
+# How to create a Managed Service offer for the commercial marketplace
+
+This article explains how to create a Managed Service offer for the Microsoft commercial marketplace using Partner Center.
+
+To publish a Managed Service offer, you must have earned a Gold or Silver Microsoft competency in Cloud Platform. If you haven't already done so, read [Plan a Managed Service offer for the commercial marketplace](./plan-managed-service-offer.md); it will help you prepare the assets you need when you create the offer in Partner Center.
+
+## Create a new offer
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
+2. In the left-navigation menu, select **Commercial Marketplace** > **Overview**.
+3. On the Overview tab, select **+ New offer** > **Managed Service**.
+
+:::image type="content" source="./media/new-offer-managed-service.png" alt-text="Illustrates the left-navigation menu.":::
+
+4. In the **New offer** dialog box, enter an **Offer ID**. This is a unique identifier for each offer in your account. This ID is visible in the URL of the commercial marketplace listing and Azure Resource Manager templates, if applicable. For example, if you enter test-offer-1 in this box, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
+
+ * Each offer in your account must have a unique offer ID.
+ * Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters.
+ * The Offer ID can't be changed after you select **Create**.
+
+5. Enter an **Offer alias**. This is the name used for the offer in Partner Center. It isn't visible in the online stores and is different from the offer name shown to customers.
+6. To generate the offer and continue, select **Create**.
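The Offer ID rules in step 4 can be expressed as a quick validation; this is an illustrative sketch (the helper name is mine, not a Partner Center API):

```python
import re

def is_valid_offer_id(offer_id: str) -> bool:
    """Check the stated Offer ID rules: lowercase letters and numbers,
    hyphens and underscores allowed, no spaces, at most 50 characters."""
    return re.fullmatch(r"[a-z0-9_-]{1,50}", offer_id) is not None
```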
+
+## Configure lead management
+
+Connect your customer relationship management (CRM) system with your commercial marketplace offer so you can receive customer contact information when a customer expresses interest in your Managed Service. You can modify this connection at any time during or after you create the offer. For detailed guidance, see [Customer leads from your commercial marketplace offer](./partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+To configure the lead management in Partner Center:
+
+1. In Partner Center, go to the **Offer setup** tab.
+2. Under **Customer leads**, select the **Connect** link.
+3. In the **Connection details** dialog box, select a lead destination from the list.
+4. Complete the fields that appear. For detailed steps, see the following articles:
+
+ * [Configure your offer to send leads to the Azure table](./partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md#configure-your-offer-to-send-leads-to-the-azure-table)
+ * [Configure your offer to send leads to Dynamics 365 Customer Engagement](./partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md#configure-your-offer-to-send-leads-to-dynamics-365-customer-engagement) (formerly Dynamics CRM Online)
+ * [Configure your offer to send leads to HTTPS endpoint](./partner-center-portal/commercial-marketplace-lead-management-instructions-https.md#configure-your-offer-to-send-leads-to-the-https-endpoint)
+ * [Configure your offer to send leads to Marketo](./partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md#configure-your-offer-to-send-leads-to-marketo)
+ * [Configure your offer to send leads to Salesforce](./partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md#configure-your-offer-to-send-leads-to-salesforce)
+
+5. To validate the configuration you provided, select the **Validate** link.
+6. When you've configured the connection details, select **Connect**.
+7. Select **Save draft**.
+
+After you submit your offer for publication in Partner Center, we'll validate the connection and send you a test lead. While you preview the offer before it goes live, test your lead connection by trying to purchase the offer yourself in the preview environment.
+
+> [!TIP]
+> Make sure the connection to the lead destination stays updated so you don't lose any leads.
+
+## Configure offer properties
+
+On the **Properties** page of your offer in Partner Center, you'll define the categories applicable to your offer and the legal contracts that support it. This information ensures your Managed Service offer is displayed correctly in the online store and offered to the right set of customers.
+
+### Select a category
+
+Under **Categories**, select at least one and up to five categories for grouping your offer into the appropriate commercial marketplace search areas.
+
+### Provide terms and conditions
+
+Under **Legal**, provide your terms and conditions for this offer. Customers will be required to accept them before using the offer. You can also provide the URL where your terms and conditions can be found.
+
+Select **Save draft** before continuing.
+
+## Next step
+
+* [Configure your Managed Service offer listing](./create-managed-service-offer-listing.md)
\ No newline at end of file
marketplace https://docs.microsoft.com/en-us/azure/marketplace/partner-center-portal/create-new-managed-service-offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/create-new-managed-service-offer.md deleted file mode 100644
@@ -1,267 +0,0 @@
-title: Create a Managed Service offer in Microsoft commercial marketplace
-description: How to create a new Managed Service offer for listing in Azure Marketplace using the Commercial Marketplace portal in Partner Center.
-ms.service: marketplace
-ms.subservice: partnercenter-marketplace-publisher
-ms.topic: how-to
-author: Microsoft-BradleyWright
-ms.author: brwrigh
-ms.date: 08/07/2020
-
-# Create a Managed Service offer
-
-Managed Service offers help to enable [Azure Lighthouse](../../lighthouse/overview.md) scenarios. When a customer accepts a Managed Service offer, they are then able to onboard resources for [Azure delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md). Before starting, [Create a Commercial Marketplace account in Partner Center](create-account.md) if you haven't done so yet. Ensure your account is enrolled in the commercial marketplace program.
-
-You must have a [Silver or Gold Cloud Platform competency level](https://partner.microsoft.com/membership/cloud-platform-competency) or be an [Azure Expert MSP](https://partner.microsoft.com/membership/azure-expert-msp) to publish a Managed Service offer.
-
-## Create a new offer
-
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home).
-2. In the left-nav menu, select **Commercial Marketplace** > **Overview**.
-3. On the Overview page, select **+ New offer** > **Managed Service**.
-
- ![Illustrates the left-navigation menu.](./media/new-offer-managed-service.png)
-
->[!NOTE]
->After an offer is published, edits made to it in Partner Center only appear in online stores after republishing the offer. Make sure you always republish after making changes.
-
-## New offer
-
-Enter an **Offer ID**. This is a unique identifier for each offer in your account.
-
-* This ID is visible to customers in the web address for the marketplace offer and Azure Resource Manager templates, if applicable.
-* Use only lowercase letters and numbers. It can include hyphens and underscores, but no spaces, and is limited to 50 characters. For example, if you enter **test-offer-1**, the offer web address will be `https://azuremarketplace.microsoft.com/marketplace/../test-offer-1`.
-* The Offer ID can't be changed after you select **Create**.
-
-Enter an **Offer alias**. This is the name used for the offer in Partner Center.
-
-* This name isn't used in the marketplace and is different from the offer name and other values shown to customers.
-* The Offer alias can't be changed after you select **Create**.
-
-Select **Create** to generate the offer and continue.
-
-## Offer setup
-
-### Customer leads
-
-[!INCLUDE [Connect lead management](./includes/connect-lead-management.md)]
-
-Per the [Managed Services certification policies](/legal/marketplace/certification-policies#700-managed-services), a **Lead Destination** is required. This will create a record in your CRM system each time a customer deploys your offer.
-
-For more information, see [Lead management overview](./commercial-marketplace-get-customer-leads.md).
-
-Select **Save draft** before continuing.
-
-## Properties
-
-This page lets you define the categories used to group your offer on the marketplace and the legal contracts supporting your offer.
-
-### Category
-
-Select a minimum of one and a maximum of five categories which will be used to place your offer into the appropriate marketplace search areas. Be sure to call out how your offer supports these categories in the offer description.
-
-### Terms and conditions
-
-Provide your own legal terms and conditions in the **Terms and conditions** field. You can also provide the URL where your terms and conditions can be found. Customers will be required to accept these terms before they can try your offer.
-
-Select **Save draft** before continuing.
-
-## Offer listing
-
-This page lets you define marketplace details (such as offer name, description, and images) for your offer.
-
-> [!NOTE]
-> Offer listing content (such as the description, documents, screenshots, and terms of use) is not required to be in English, as long as the offer description begins with the phrase, "This application is available only in [non-English language]." It is also acceptable to provide a *Useful Link URL* to offer content in a language other than the one used in the Offer listing content.
-
-Here's an example of how offer information appears in the Azure portal:
-
-:::image type="content" source="media/example-managed-services.png" alt-text="Illustrates how this offer appears in the Azure portal.":::
-
-#### Call-out descriptions
-
-1. Title
-2. Description
-3. Useful links
-4. Screenshots
-
-### Name
-
-The name you enter here will be shown to customers as the title of your offer listing. This field is pre-populated with the text you entered for **Offer alias** when you created the offer, but you can change this value. This name may be trademarked (and you may include trademark or copyright symbols). The name can't be more than 50 characters and can't include any emojis.
-
-### Search results summary
-
-Provide a short description of your offer (up to 100 characters), which may be used in marketplace search results.
-
-### Long summary
-
-Provide a longer description of your offer (up to 256 characters). This long summary may also be used in marketplace search results.
-
-### Description
-
-[!INCLUDE [Long description-1](./includes/long-description-1.md)]
-
-[!INCLUDE [Long description-2](./includes/long-description-2.md)]
-
-[!INCLUDE [Long description-3](./includes/long-description-3.md)]
-
-### Privacy policy link
-
-Enter the URL to your organization's privacy policy (hosted on your site). You are responsible for ensuring your app complies with privacy laws and regulations, and for providing a valid privacy policy.
-
-### Useful links
-
-Provide optional supplemental online documents about your solution. Add additional useful links by clicking **+ Add a link**.
-
-### Contact Information
-
-In this section, you must provide the name, email, and phone number for a **Support contact** and an **Engineering contact**. This info is not shown to customers, but will be available to Microsoft, and may be provided to CSP partners.
-
-### Support URLs
-
-If you have support websites for **Azure Global Customers** and/or **Azure Government customers**, provide those URLs here.
-
-### Marketplace images
-
-In this section, you can provide logos and images that will be used when showing your offer to customer. All images must be in .png format.
-
-[!INCLUDE [logo tips](../includes/graphics-suggestions.md)]
-
->[!NOTE]
->If you have an issue uploading files, make sure your local network does not block the https://upload.xboxlive.com service used by Partner Center.
-
-#### Store logos
-
-Provide a PNG file for the **Large** size logo. Partner Center will use this to create a **Small** and a **Medium** logo. You can optionally replace these with different images later.
-
-- **Large** (from 216 x 216 to 350 x 350 px, required)
-- **Medium** (90 x 90 px, optional)
-- **Small** (48 x 48 px, optional)
-
-These logos are used in different places in the listing:
-
-[!INCLUDE [logos-azure-marketplace-only](../includes/logos-azure-marketplace-only.md)]
-
-[!INCLUDE [logo tips](../includes/graphics-suggestions.md)]
-
-#### Screenshots
-
-Add up to five screenshots that show how your offer works. All screenshots must be 1280 x 720 pixels.
-
-#### Videos
-
-You can optionally add up to five videos that demonstrate your offer. These videos should be hosted on YouTube and/or Vimeo. For each one, enter the video's name, its URL, and a thumbnail image of the video (1280 x 720 pixels).
-
-#### Additional marketplace listing resources
-
-- [Best practices for marketplace offer listings](../gtm-offer-listing-best-practices.md)
-
-Select **Save draft** before continuing.
-
-## Preview
-
-Before you publish your offer live to the broader marketplace audience, you'll first need to make it available to a limited preview audience. This lets you confirm how your offer appears in Azure Marketplace before making it available to customers. Microsoft support and engineering teams will also be able to view your offer during this preview period.
-
-You can define the preview audience by entering Azure subscription IDs in the **Preview Audience** section. You can enter up to 10 subscription IDs manually, or upload a .csv file with up to 100 subscription IDs.
-
-Any customers associated with these subscriptions will be able to view the offer in Azure Marketplace before it goes live. Be sure to include your own subscriptions here so you can preview your offer.
-
-Select **Save draft** before continuing.
-
-## Plan overview
-
-Each offer must have one or more plans (formerly called SKUs). You might add multiple plans to support different feature sets at different prices or to customize a specific plan for a limited audience of specific customers. Customers can view the plans that are available to them under the parent offer.
-
-You can create up to 100 plans for each offer: up to 45 of these can be private. Learn more about private plans in [Private offers in the Microsoft commercial marketplace](../private-offers.md).
-
-On the **Plan overview** page, select **+ Create new plan**. Then enter a **Plan ID** and a **Plan name**. Both of these values can only contain lowercase alphanumeric characters, dashes, and underscores, with a maximum of 50 characters. These values may be visible to customers, and they can't be changed after you publish the offer.
-
-Select **Create** once you have entered these values to continue working on your plan. There are three sections to complete: **Plan listing**, **Pricing and availability**, and **Technical configuration**.
-
-### Plan listing
-
-First, provide a **Search results summary** for the plan. This is a short description of your plan (up to 100 characters), which may be used in marketplace search results.
-
-Next, enter a **Description** that provides a more detailed explanation of the plan.
-
-### Pricing and availability
-
-Currently, there is only one pricing model that can be used for Managed Service offers: **Bring your own license (BYOL)**. This means that you will bill your customers directly for costs related to this offer, and Microsoft does not charge any fees to you.
-
-The **Plan visibility** section lets you indicate if this plan should be [private](../../marketplace/private-offers.md). If you leave the **This is a private plan** box unchecked, your plan will not be restricted to specific customers (or to a certain number of customers).
-
-> [!NOTE]
-> Private plans are not supported with subscriptions established through a reseller of the Cloud Solution Provider (CSP) program.
-
-To make this plan available only to specific customers, select **Yes**. When you do so, you'll need to identify the customers by providing their subscription IDs. These can be entered one by one (for up to 10 subscriptions) or by uploading a .csv file (for a maximum of 10,000 subscriptions across all plans). Be sure to include your own subscriptions here so you can test and validate the offer.
-
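For the .csv route described above, here's a minimal sketch (Python, not part of the source article) of producing the subscription-ID file. The single `Id` column header is an assumption for illustration; check the template Partner Center provides before uploading.

```python
import csv

# Hypothetical list of customer Azure subscription IDs to restrict a private plan to.
subscription_ids = [
    "11111111-1111-1111-1111-111111111111",
    "22222222-2222-2222-2222-222222222222",
]

# Partner Center caps CSV imports at 10,000 subscriptions across all plans.
assert len(subscription_ids) <= 10_000

with open("private-plan-audience.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Id"])  # assumed header; verify against Partner Center's template
    for sub in subscription_ids:
        writer.writerow([sub])
```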
-> [!IMPORTANT]
-> Once a plan has been published as public, you can't change it to private. To control which customers can accept your offer and delegate resources, use a private plan. With a public plan, you can't restrict availability to certain customers or even to a certain number of customers (although you can stop selling the plan completely if you choose to do so). You can [remove access to a delegation](../../lighthouse/how-to/remove-delegation.md) after a customer accepts an offer only if you included an **Authorization** with the **Role Definition** set to [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when you published the offer. You can also reach out to the customer and ask them to [remove your access](../../lighthouse/how-to/view-manage-service-providers.md#add-or-remove-service-provider-offers).
-
-### Technical configuration
-
-This section of your plan creates a manifest with authorization information for managing customer resources. This information is required in order to enable [Azure delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md).
-
-Be sure to review [Tenants, roles, and users in Azure Lighthouse scenarios](../../lighthouse/concepts/tenants-users-roles.md#best-practices-for-defining-users-and-roles) to understand which roles are supported and the best practices for defining your authorizations.
-
-> [!NOTE]
-> As noted above, the users and roles in your **Authorization** entries will apply to every customer who purchases the plan. If you want to limit access to a specific customer, you'll need to publish a private plan for their exclusive use.
-
-#### Manifest
-
-First, provide a **Version** for the manifest. Use the format *n.n.n* (for example, 1.2.5).
-
-Next, enter your **Tenant ID**. This is a GUID associated with the Azure Active Directory (Azure AD) tenant ID of your organization; that is, the managing tenant from which you will access your customers' resources. If you don't have this handy, you can find it by hovering over your account name on the upper right-hand side of the Azure portal, or by selecting **Switch directory**.
-
-If you publish a new version of your offer and need to create an updated manifest, select **+ New manifest**. Be sure to increase the version number from the previous manifest version.
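As a quick sanity check on the two values described above, here's a small Python sketch (not part of the source article; the helper names are illustrative) that validates the *n.n.n* manifest version format and confirms the tenant ID is a GUID:

```python
import re
import uuid

def is_valid_manifest_version(version: str) -> bool:
    # The manifest version must use the n.n.n format, for example 1.2.5.
    return re.fullmatch(r"\d+\.\d+\.\d+", version) is not None

def is_valid_tenant_id(tenant_id: str) -> bool:
    # An Azure AD tenant ID is a GUID; uuid.UUID raises ValueError for anything else.
    try:
        uuid.UUID(tenant_id)
        return True
    except ValueError:
        return False
```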
-
-#### Authorization
-
-Authorizations define the entities in your managing tenant who can access resources and subscriptions for customers who purchase the plan. Each of these entities is assigned a built-in role that grants specific levels of access.
-
-You can create up to twenty authorizations for each plan.
-
-> [!TIP]
-> In most cases, you'll want to assign roles to an Azure AD user group or service principal, rather than to a series of individual user accounts. This lets you add or remove access for individual users without having to update and republish the plan when your access requirements change. When assigning roles to Azure AD groups, [be sure that the **Group type** is **Security** and not **Office 365**](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). For additional recommendations, see [Tenants, roles, and users in Azure Lighthouse scenarios](../../lighthouse/concepts/tenants-users-roles.md).
-
-For each **Authorization**, you'll need to provide the following. You can then select **+ Add authorization** as many times as needed to add more users and role definitions.
-
-- **Azure AD Object ID**: The Azure AD identifier of a user, user group, or application which will be granted certain permissions (as defined by the Role Definition) to your customers' resources.
-- **Azure AD Object Display Name**: A friendly name to help the customer understand the purpose of this authorization. The customer will see this name when delegating resources.
-- **Role Definition**: Select one of the available Azure AD built-in roles from the list. This role will determine the permissions that the user in the **Azure AD Object ID** field will have on your customers' resources. For descriptions of these roles, see [Built-in roles](../../role-based-access-control/built-in-roles.md) and [Role support for Azure Lighthouse](../../lighthouse/concepts/tenants-users-roles.md#role-support-for-azure-lighthouse).
- > [!NOTE]
- > As applicable new built-in roles are added to Azure, they will become available here, although there may be some delay before they appear.
-- **Assignable Roles**: This option will appear only if you have selected User Access Administrator in the **Role Definition** for this authorization. If so, you must add one or more assignable roles here. The user in the **Azure AD Object ID** field will be able to assign these roles to [managed identities](../../active-directory/managed-identities-azure-resources/overview.md), which is required in order to [deploy policies that can be remediated](../../lighthouse/how-to/deploy-policy-remediation.md). Note that no other permissions normally associated with the User Access Administrator role will apply to this user.
-
-> [!TIP]
-> To ensure you can [remove access to a delegation](../../lighthouse/how-to/remove-delegation.md) if needed, include an **Authorization** with the **Role Definition** set to [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role). If this role is not assigned, delegated resources can only be removed by a user in the customer's tenant.
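To make the shape of a set of authorization entries concrete, here's a hedged Python sketch. The field names follow the Azure Lighthouse ARM-template convention (`principalId`, `principalIdDisplayName`, `roleDefinitionId`); the object IDs are placeholders, the display names are invented, and you should verify the role definition GUIDs against the built-in roles reference before using them.

```python
# Placeholder object IDs; in a real plan these are Azure AD group or service
# principal object IDs from your managing tenant.
authorizations = [
    {
        "principalId": "00000000-0000-0000-0000-000000000000",
        "principalIdDisplayName": "Tier 1 Support",
        # Contributor built-in role (GUID as listed in the built-in roles docs)
        "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
    },
    {
        "principalId": "00000000-0000-0000-0000-000000000000",
        "principalIdDisplayName": "Delegation cleanup",
        # Managed Services Registration Assignment Delete Role, so access to
        # the delegation can be removed later.
        "roleDefinitionId": "91c1777a-f3dc-4fae-b103-61d183457e46",
    },
]

# Each plan allows at most twenty authorizations.
assert len(authorizations) <= 20
```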
-
-Once you've completed all of the sections for your plan, you can select **+ Create new plan** as many times as you need to create additional plans. When you're done, select **Save**.
-
-Select **Save draft** before continuing.
-
-## Publish
-
-### Submit offer to preview
-
-Once you have completed all the required sections of the offer, select **Review and publish** in the upper-right corner of the portal.
-
-If it's your first time publishing this offer, you can:
-
-- See the completion status for each section of the offer.
- - **Not started** - The section has not been touched and needs to be completed.
- - **Incomplete** - The section has errors that need to be fixed or requires more information to be provided. Go back to the section(s) and update it.
- - **Complete** - The section is complete, all required data has been provided and there are no errors. All sections of the offer must be in a complete state before you can submit the offer.
-- In the **Notes for certification** section, provide testing instructions to the certification team to ensure that your app is tested correctly, in addition to any supplementary notes helpful for understanding your app.
-- Submit the offer for publishing by selecting **Submit**. We will send you an email when a preview version of the offer is available for you to review and approve. Return to Partner Center and select **Go-live** for the offer to publish your offer to the public (or if a private offer, to the private audience).
-
-### Customer experience and offer management
-
-When a customer deploys your offer, they will be able to delegate subscriptions or resource groups for [Azure delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md). For more about this process, see [The customer onboarding process](../../lighthouse/how-to/publish-managed-services-offers.md#the-customer-onboarding-process).
-
-You can [publish an updated version of your offer](update-existing-offer.md) at any time. For example, you may want to add a new role definition to a previously-published offer. When you do so, customers who have already added the offer will see an icon in the [**Service providers**](../../lighthouse/how-to/view-manage-service-providers.md) page in the Azure portal that lets them know an update is available. Each customer will be able to review the changes and decide whether they want to update to the new version.
-
-## Next steps
-
-- [Update an existing offer in the Commercial Marketplace](./update-existing-offer.md)
-- [Learn about Azure Lighthouse](../../lighthouse/overview.md)
\ No newline at end of file
marketplace https://docs.microsoft.com/en-us/azure/marketplace/plan-managed-service-offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-managed-service-offer.md new file mode 100644
@@ -0,0 +1,130 @@
+---
+title: Plan a Managed Service offer for the Microsoft commercial marketplace
+description: How to plan a new Managed Service offer for Azure Marketplace using the commercial marketplace program in Microsoft Partner Center.
+ms.service: marketplace
+ms.subservice: partnercenter-marketplace-publisher
+ms.topic: conceptual
+author: Microsoft-BradleyWright
+ms.author: brwrigh
+ms.reviewer: anbene
+ms.date: 12/23/2020
+---
+
+# How to plan a Managed Service offer for the Microsoft commercial marketplace
+
+This article introduces the requirements for publishing a Managed Service offer to the Microsoft commercial marketplace through Partner Center.
+
Managed Services are Azure Marketplace offers that enable cross-tenant and multi-tenant management with Azure Lighthouse. To learn more, see [What is Azure Lighthouse?](../lighthouse/overview.md) When a customer purchases a Managed Service offer, they're able to delegate one or more subscriptions or resource groups.
+
+## Eligibility requirements
+
+To publish a Managed Service, you must have earned a Gold or Silver Microsoft Competency in Cloud Platform. This competency demonstrates your expertise to customers. For more information, see [Microsoft Partner Network Competencies](https://partner.microsoft.com/membership/competencies).
+
+Offers must meet all applicable [commercial marketplace certification policies](https://docs.microsoft.com/legal/marketplace/certification-policies) to be published on Azure Marketplace.
+
+## Customer leads
+
+You must connect your offer to your customer relationship management (CRM) system to collect customer information. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and online store where they found your offer, will be sent to the CRM system that you've configured. The commercial marketplace supports different kinds of CRM systems, along with the option to use an Azure table or configure an HTTPS endpoint using Power Automate.
+
+You can add or modify a CRM connection at any time during or after offer creation. For detailed guidance, see [Customer leads from your commercial marketplace offer](partner-center-portal/commercial-marketplace-get-customer-leads.md).
+
+## Legal contracts
+
+In the Properties page of Partner Center, you'll be asked to provide **terms and conditions** for the use of your offer. You can either enter your terms directly in Partner Center or provide the URL where they can be found. Customers will be required to accept these terms and conditions before purchasing your offer.
+
+## Offer listing details
+
+When you create your Managed Service offer in Partner Center, you'll enter text, images, documents, and other offer details. This is what customers will see when they discover your offer on Azure Marketplace. See the following example:
+
+:::image type="content" source="media/example-managed-service.png" alt-text="Illustrates how a Managed Service offer appears on Azure Marketplace.":::
+
+**Call-out descriptions**
+
+1. Logo
+1. Name
+1. Short description
+1. Categories
+1. Legal contracts and privacy policy
+1. Description
+1. Screenshots/videos
+1. Useful links
+
+Here's an example of how the offer listing appears in the Azure portal:
+
+:::image type="content" source="media/example-managed-service-azure-portal.png" alt-text="Illustrates how this offer appears in the Azure portal.":::
+
+**Call-out descriptions**
+
+1. Name
+2. Description
+3. Useful links
+4. Screenshots/videos
+
+> [!NOTE]
+> If your offer is in a language other than English, the offer listing can be in that language, but the description must begin or end with the English phrase "This service is available in &lt;language of your offer content>". You can also provide supporting documents in a language that's different from the one used in the offer listing details.
+
+To help create your offer more easily, prepare some of these items ahead of time. The following items are required unless otherwise noted.
+
+**Name**: this will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It can't contain emojis (unless they're the trademark and copyright symbols) and must be limited to 50 characters.
+
+**Search results summary**: describe the purpose or goal of your offer in 100 characters or less. This summary is used in the commercial marketplace listing search results. It shouldn't be identical to the title. Consider including your top SEO keywords.
+
+**Short description**: provide a short description of your offer (up to 256 characters). It'll be displayed on your offer listing in the Azure portal.
+
+**Description**: describe your offer in 3,000 characters or less. This description will be displayed in the commercial marketplace listing. Consider including a value proposition, key benefit, category or industry associations, and any necessary disclosures.
+
+Here are some tips for writing your description:
+
+* Clearly describe the value of your offer in the first few sentences, including:
+ * The type of user who benefits from the offer.
+ * What customer needs or issues the offer addresses.
+* Remember that the first few sentences might be displayed in search results.
+* Use industry-specific vocabulary.
+
+You can use HTML tags to format your description. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](./supported-html-tags.md).
+
+**Privacy policy link**: provide a URL to the privacy policy, hosted on your site. You're responsible for ensuring your offer complies with privacy laws and regulations, and for providing a valid privacy policy.
+
+**Useful links** (optional): upload supplemental online documents about your offer.
+
+**Contact information**: provide the name, email address, and phone number of two people in your company (you can be one of them): a support contact and an engineering contact. We'll use this information to communicate with you about your offer. This information isn't shown to customers but may be provided to Cloud Solution Provider (CSP) partners.
+
+**Support URLs** (optional): if you have support websites for Azure Global Customers and/or Azure Government customers, provide those URLs.
+
+**Marketplace media – logos**: provide a PNG file for the large-size logo of your offer. Partner Center will use it to create medium and small logos. You can optionally replace these logos with a different image later.
+
+* The large logo (from 216 x 216 to 350 x 350 px) appears on your offer listing on Azure Marketplace.
+* The medium logo (90 x 90 px) is shown when a new resource is created.
+* The small logo (48 x 48 px) is used on the Azure Marketplace search results.
+
+Follow these guidelines for your logos:
+
+* Make sure the image isn't stretched.
+* The Azure design has a simple color palette. Limit the number of primary and secondary colors on your logo.
+* The Azure portal colors are white and black. Don't use these as the background of your logo. We recommend simple primary colors that make your logo prominent.
+* If you use a transparent background, make sure that the logo and text aren't white, black, or blue.
+* The look and feel of your logo should be flat. Avoid gradients. Don't place text on the logo, not even your company or brand name.
+
+**Marketplace media – screenshots** (optional): Add up to five images that demonstrate how your offer works. All images must be 1280 x 720 pixels in size and in .PNG format.
+
+**Marketplace media – videos** (optional): upload up to five videos that demonstrate your offer. The videos must be hosted on YouTube or Vimeo and have a thumbnail (1280 x 720 PNG file).
+
+## Preview audience
+
+A preview audience can access your offer before it's published on Azure Marketplace in order to test it. On the **Preview audience** page of Partner Center, you can define a limited preview audience.
+
+> [!NOTE]
+> A preview audience differs from a private plan. A private plan is one you make available only to a specific audience you choose. This enables you to negotiate a custom plan with specific customers.
+
+You can send invites to Microsoft Account (MSA) or Azure Active Directory (Azure AD) email addresses. Add up to 10 email addresses manually or import up to 20 with a .csv file. If your offer is already live, you can still define a preview audience for testing any changes or updates to your offer.
+
+## Plans and pricing
+
+Managed Service offers require at least one plan. A plan defines the solution scope, limits, and the associated pricing, if applicable. You can create multiple plans for your offer to give your customers different technical and pricing options. For general guidance about plans, including private plans, see [Plans and pricing for commercial marketplace offers](plans-pricing.md).
+
+Managed Services support only one pricing model: **Bring your own license (BYOL)**. This means that you'll bill your customers directly, and Microsoft won't charge you any fees.
+
+## Next steps
+
+* [Create a Managed Service offer](./create-managed-service-offer.md)
+* [Offer listing best practices](./gtm-offer-listing-best-practices.md)
marketplace https://docs.microsoft.com/en-us/azure/marketplace/plan-saas-offer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plan-saas-offer.md
@@ -152,7 +152,7 @@ When you [create a new SaaS offer](create-new-saas-offer.md) in Partner Center,
The following example shows an offer listing in the Azure portal.
-![Illustrates an offer listing in the Azure portal.](./media/example-managed-services.png)
+![Illustrates an offer listing in the Azure portal.](./media/example-managed-service-azure-portal.png)
**Call out descriptions**
marketplace https://docs.microsoft.com/en-us/azure/marketplace/sell-from-countries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/sell-from-countries.md
@@ -8,12 +8,12 @@ ms.custom: references_regions
ms.topic: how-to author: mingshen-ms ms.author: mingshen
-ms.date: 09/02/2020
+ms.date: 01/04/2021
---

# Supported publisher countries and regions
-To publish an offer to the Microsoft commercial marketplace, you must have your residence in one of the following countries or regions.
+To publish an offer to the Microsoft commercial marketplace, your company must legally reside in one of the following countries or regions:
- Afghanistan
- Åland Islands
@@ -129,7 +129,6 @@ To publish an offer to the Microsoft commercial marketplace, you must have your
- Kazakhstan
- Kenya
- Kiribati
-- Korea
- Kosovo
- Kuwait
- Kyrgyzstan
@@ -219,6 +218,7 @@ To publish an offer to the Microsoft commercial marketplace, you must have your
- Somalia
- South Africa
- South Georgia and South Sandwich Islands
+- South Korea (Republic of Korea)
- South Sudan
- Spain
- Sri Lanka
@@ -259,4 +259,4 @@ To publish an offer to the Microsoft commercial marketplace, you must have your
- Wallis and Futuna
- Yemen
- Zambia
-- Zimbabwe
+- Zimbabwe
\ No newline at end of file
media-services https://docs.microsoft.com/en-us/azure/media-services/azure-media-player/azure-media-player-playback-technology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-playback-technology.md
@@ -39,8 +39,8 @@ Given the recommended tech order with streaming content from Azure Media Service
| Browser | OS | Expected Tech (Clear) | Expected Tech (AES) | Expected Tech (DRM) |
|----------------|----------------------------------------------------------|------------------------|----------------------|------------------------------|
-| EdgeIE 11 | Windows 10, Windows 8.1, Windows Phone 101 | azureHtml5JS | azureHtml5JS | azureHtml5JS (PlayReady) |
-| IE 11IE 9-101 | Windows 7, Windows Vista<sup>1</sup> | flashSS | flashSS | silverlightSS (PlayReady) |
+| EdgeIE 11 | Windows 10, Windows 8.1, Windows Phone 10<sup>1</sup> | azureHtml5JS | azureHtml5JS | azureHtml5JS (PlayReady) |
+| IE 11 | Windows 7, Windows Vista<sup>1</sup> | flashSS | flashSS | silverlightSS (PlayReady) |
| IE 11 | Windows Phone 8.1 | azureHtml5JS | azureHtml5JS | not supported |
| Edge | Xbox One<sup>1</sup> (Nov 2015 update) | azureHtml5JS | azureHtml5JS | not supported |
| Chrome 37+ | Windows 10, Windows 8.1, macOS X Yosemite<sup>1</sup> | azureHtml5JS | azureHtml5JS | azureHtml5JS (Widevine) |
@@ -53,7 +53,7 @@ Given the recommended tech order with streaming content from Azure Media Service
| Chrome 37+ | Android 4.4.4+<sup>2</sup> | azureHtml5JS | azureHtml5JS | azureHtml5JS (Widevine) |
| Chrome 37+ | Android 4.0<sup>2</sup> | html5 | html5 (no token)<sup>3</sup> | not supported |
| Firefox 42+ | Android 5.0+<sup>2</sup> | azureHtml5JS | azureHtml5JS | not supported |
-| IE 8 | Windows | not supported | not supported | not supported |
+| IE 8, IE 9, IE 10 | Windows | not supported | not supported | not supported |
<sup>1</sup> Configuration not supported or tested; listed as reference for completion.
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/media-services-metrics-howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-services-metrics-howto.md
@@ -20,7 +20,7 @@ ms.custom: devx-track-azurecli
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-[Azure Monitor](../../azure-monitor/overview.md) enables you to monitor metrics and diagnostic logs that help you understand how your applications are performing. For detailed description of this feature and to see why you would want to use Azure Media Services metrics and diagnostics logs, see [Monitor Media Services metrics and diagnostic logs](media-services-metrics-diagnostic-logs.md).
+[Azure Monitor](../../azure-monitor/overview.md) enables you to monitor metrics and diagnostic logs that help you understand how your applications are performing. For a detailed description of this feature and to understand why you should use Azure Media Services metrics and diagnostics logs, see [Monitor Media Services metrics and diagnostic logs](media-services-metrics-diagnostic-logs.md).
Azure Monitor provides several ways to interact with metrics, including charting them in the portal, accessing them through the REST API, or querying them using Azure CLI. This article shows how to monitor metrics with the Azure portal charts and Azure CLI.
@@ -33,24 +33,21 @@ Azure Monitor provides several ways to interact with metrics, including charting
1. Sign in to the Azure portal at https://portal.azure.com.
1. Navigate to your Azure Media Services account and select **Metrics**.
-1. Click the **RESOURCE** box and select the resource for which you want to monitor metrics.
+1. Click the **Scope** box and select the resource you want to monitor.
- The **Select a resource** window appears on the right with the list of resources available to you. In this case you see:
+ The **Select a scope** window appears on the right with the list of resources available to you. In this case you see:
   * &lt;Media Services account name&gt;
   * &lt;Media Services account name&gt;/&lt;streaming endpoint name&gt;
   * &lt;storage account name&gt;
- Select the resource and press **Apply**. For details about supported resources and metrics, see [Monitor Media Services metrics](media-services-metrics-diagnostic-logs.md).
-
- ![Screenshot that shows the selected resource and highlights the Apply button.](media/media-services-metrics/metrics02.png)
+ Filter, then select the resource and press **Apply**. For details about supported resources and metrics, see [Monitor Media Services metrics](media-services-metrics-diagnostic-logs.md).
> [!NOTE]
- > To switch between resources for which you want to monitor metrics, click on the **RESOURCE** box again and repeat this step.
-1. (Optionally) give your chart a name (edit the name by pressing the pencil at the top).
-1. Add metrics that you want to view.
+ > To switch between resources you want to monitor, click on the **Scope** box again and repeat this step.
- ![Metrics](media/media-services-metrics/metrics03.png)
+1. Optional: give your chart a name (edit the name by pressing the pencil at the top).
+1. Add the metrics that you want to view.
1. You can pin your chart to your dashboard.

## View metrics with Azure CLI
@@ -67,8 +64,8 @@ To get other metrics, substitute "Egress" for the metric name you are interested
## See also
-* [Azure Monitor Metrics](../../azure-monitor/platform/data-platform.md)
-* [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/platform/alerts-metric.md).
+- [Azure Monitor Metrics](../../azure-monitor/platform/data-platform.md)
+- [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/platform/alerts-metric.md).
## Next steps
media-services https://docs.microsoft.com/en-us/azure/media-services/latest/storage-account-concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/storage-account-concept.md
@@ -11,7 +11,7 @@ editor: ''
ms.service: media-services ms.workload: ms.topic: conceptual
-ms.date: 08/31/2020
+ms.date: 01/05/2021
ms.author: inhenkel ---
@@ -51,6 +51,9 @@ To protect your assets at rest, the assets should be encrypted by the storage si
<sup>1</sup> In Media Services v3, storage encryption (AES-256 encryption) is only supported for backwards compatibility when your assets were created with Media Services v2, which means v3 works with existing storage encrypted assets but won't allow creation of new ones.
+## Double encryption
+Media Services supports double encryption. To learn more about double encryption, see [Azure double encryption](https://docs.microsoft.com/azure/security/fundamentals/double-encryption).
+
## Storage account errors

The "Disconnected" state for a Media Services account indicates that the account no longer has access to one or more of the attached storage accounts due to a change in storage access keys. Up-to-date storage access keys are required by Media Services to perform many tasks in the account.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/monitoring-logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/monitoring-logging.md
@@ -1,29 +1,30 @@
--- title: Monitoring and logging - Azure
-description: This article provides an overview of Live Video Analytics on IoT Edge monitoring and logging.
+description: This article provides an overview of monitoring and logging in Live Video Analytics on IoT Edge.
ms.topic: reference
ms.date: 04/27/2020
---

# Monitoring and logging
-In this article, you will learn about how you can receive events from the Live Video Analytics on IoT Edge module for remote monitoring.
+In this article, you'll learn how to receive events for remote monitoring from the Live Video Analytics on IoT Edge module.
-You will also learn about how you can control the logs that the module generates.
+You'll also learn how to control the logs that the module generates.
## Taxonomy of events
-Live Video Analytics on IoT Edge emits events, or telemetry data according to the following taxonomy.
+Live Video Analytics on IoT Edge emits events, or telemetry data, according to the following taxonomy:
> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/telemetry-schema/taxonomy.png" alt-text="Taxonomy of events":::
+> :::image type="content" source="./media/telemetry-schema/taxonomy.png" alt-text="Diagram that shows the taxonomy of events.":::
-* Operational: events that are generated as part of actions taken by a user, or during the execution of a [media graph](media-graph-concept.md).
+* Operational: Events generated by the actions of a user or during the execution of a [media graph](media-graph-concept.md)
- * Volume: expected to be low (a few times a minute, or even lower rate).
+ * Volume: Expected to be low (a few times a minute, or even less)
* Examples:
- Recording started (below), recording stopped
+ - Recording started (shown in the following example)
+ - Recording stopped
``` {
@@ -40,12 +41,13 @@ Live Video Analytics on IoT Edge emits events, or telemetry data according to th
} } ```
-* Diagnostics: events that help to diagnose problems and/or issues with performance.
+* Diagnostics: Events that help to diagnose problems with performance
- * Volume: can be high (several times a minute).
+ * Volume: Can be high (several times a minute)
* Examples:
- RTSP [SDP](https://en.wikipedia.org/wiki/Session_Description_Protocol) information (below), or gaps in the incoming video feed.
+ - RTSP [SDP](https://en.wikipedia.org/wiki/Session_Description_Protocol) information (shown in the following example)
+ - Gaps in the incoming video feed
``` {
@@ -61,12 +63,13 @@ Live Video Analytics on IoT Edge emits events, or telemetry data according to th
} } ```
-* Analytics: events that are generated as part of video analysis.
+* Analytics: Events generated as part of video analysis
- * Volume: can be high (several times a minute or more often).
+ * Volume: Can be high (several times a minute or more)
* Examples:
- Motion detected (below), Inference result.
+ - Motion detected (shown in the following example)
+ - Inference result
``` {
@@ -96,19 +99,19 @@ Live Video Analytics on IoT Edge emits events, or telemetry data according to th
} ```
-The events emitted by the module are sent to the [IoT Edge Hub](../../iot-edge/iot-edge-runtime.md#iot-edge-hub), and from there it can be routed to other destinations.
+The events emitted by the module are sent to the [IoT Edge hub](../../iot-edge/iot-edge-runtime.md#iot-edge-hub). They can be routed from there to other destinations.
### Timestamps in analytic events
-As indicated above, events generated as part of video analysis have a timestamp associated with them. If you [recorded the live video](video-recording-concept.md) as part of your graph topology, then this timestamp helps you locate where in the recorded video that particular event occurred. Following are the guidelines on how to map the timestamp in an analytic event to the timeline of the video recorded into an [Azure Media Service asset](terminology.md#asset).
+As indicated previously, events generated as part of video analysis have timestamps associated with them. If you [recorded the live video](video-recording-concept.md) as part of your graph topology, these timestamps help you locate where in the recorded video the particular event occurred. Following are guidelines on how to map the timestamp in an analytic event to the timeline of the video recorded into an [Azure Media Services asset](terminology.md#asset).
-First, extract the `eventTime` value. Use this value in a [time range filter](playback-recordings-how-to.md#time-range-filters) to retrieve a suitable portion of the recording. For example, you may want to fetch video that starts 30 seconds before `eventTime` and ends 30 seconds afterwards. With the above example, where `eventTime` is 2020-05-12T23:33:09.381Z, a request for a HLS manifest for the +/- 30s window would look like the following:
+First, extract the `eventTime` value. Use this value in a [time range filter](playback-recordings-how-to.md#time-range-filters) to retrieve a suitable portion of the recording. For example, you might want to retrieve video that starts 30 seconds before `eventTime` and ends 30 seconds after it. For the previous example, where `eventTime` is 2020-05-12T23:33:09.381Z, a request for an HLS manifest for the 30 seconds before and after `eventTime` would look like this request:
```
https://{hostname-here}/{locatorGUID}/content.ism/manifest(format=m3u8-aapl,startTime=2020-05-12T23:32:39Z,endTime=2020-05-12T23:33:39Z).m3u8
```
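The window boundaries in such a request can be derived from `eventTime` programmatically. A minimal sketch, in which the host name and locator GUID are placeholders rather than values from this article:

```python
from datetime import datetime, timedelta, timezone

def manifest_url(host, locator, event_time, window_seconds=30):
    """Build an HLS manifest URL covering +/- window_seconds around event_time."""
    event = datetime.strptime(event_time, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # the time range filter uses whole-second precision
    start = (event - timedelta(seconds=window_seconds)).strftime(fmt)
    end = (event + timedelta(seconds=window_seconds)).strftime(fmt)
    return (f"https://{host}/{locator}/content.ism/manifest"
            f"(format=m3u8-aapl,startTime={start},endTime={end}).m3u8")

print(manifest_url("example.net", "locator-guid", "2020-05-12T23:33:09.381Z"))
```

For the sample `eventTime` of 2020-05-12T23:33:09.381Z, this produces the same `startTime`/`endTime` pair shown in the preceding request.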
-The URL above would return a so-called [master playlist](https://developer.apple.com/documentation/http_live_streaming/example_playlists_for_http_live_streaming), containing URLs for media playlists. The media playlist would contain entries like the following:
+The preceding URL would return a [master playlist](https://developer.apple.com/documentation/http_live_streaming/example_playlists_for_http_live_streaming) that contains URLs for media playlists. The media playlist would contain entries like this one:
```
...
@@ -116,21 +119,21 @@ The URL above would return a so-called [master playlist](https://developer.apple
Fragments(video=143039375031270,format=m3u8-aapl)
...
```
-In the above, the entry reports that a video fragment is available that starts at a timestamp value of `143039375031270`. The `timestamp` value in the analytic event uses the same timescale as the media playlist, and can be used to identify the relevant video fragment, and seek to the correct frame.
+The preceding entry reports that a video fragment is available that starts at a `timestamp` value of `143039375031270`. The `timestamp` value in the analytic event uses the same timescale as the media playlist. It can be used to identify the relevant video fragment and seek to the correct frame.
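As a sketch of that seek calculation: assuming the playlist's timescale is 10,000,000 ticks per second (a common value for Media Services assets, but an assumption here; verify it against your own playlist), the offset of the event within the fragment is the difference of the two timestamp values divided by the timescale:

```python
ASSUMED_TIMESCALE = 10_000_000  # ticks per second; verify against your own playlist

def seek_offset_seconds(event_timestamp, fragment_start, timescale=ASSUMED_TIMESCALE):
    """Seconds into the video fragment at which the analytic event occurred."""
    return (event_timestamp - fragment_start) / timescale

# An event 25,000,000 ticks after the fragment start is 2.5 seconds in.
print(seek_offset_seconds(143039400031270, 143039375031270))
```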
-For more information, you can read one of the many [articles](https://www.bing.com/search?q=frame+accurate+seeking+in+HLS) on frame accurate seeking in HLS.
+For more information, see these [articles on frame-accurate seeking](https://www.bing.com/search?q=frame+accurate+seeking+in+HLS) in HLS.
## Controlling events
-You can use the following module twin properties, as documented in [module twin JSON schema](module-twin-configuration-schema.md), to control the operational and diagnostic events that are published by the Live Video Analytics on IoT Edge module.
+You can use the following module twin properties to control the operational and diagnostic events published by the Live Video Analytics on IoT Edge module. These properties are documented in the [module twin JSON schema](module-twin-configuration-schema.md).
-`diagnosticsEventsOutputName` – include and provide (any) value for this property, in order to get diagnostic events from the module. Omit it, or leave it empty to stop the module from publishing diagnostic events.
+- `diagnosticsEventsOutputName`: To get diagnostic events from the module, include this property and provide any value for it. Omit it or leave it empty to stop the module from publishing diagnostic events.
-`operationalEventsOutputName` – include and provide (any) value for this property, in order to get operational events from the module. Omit it, or leave it empty to stop the module from publishing operational events.
+- `operationalEventsOutputName`: To get operational events from the module, include this property and provide any value for it. Omit it or leave it empty to stop the module from publishing operational events.
-The analytics events are generated by nodes such as the motion detection processor, or the HTTP extension processor, and the IoT hub sink is used to send them to the IoT Edge Hub.
+Analytics events are generated by nodes like the motion detection processor or the HTTP extension processor. The IoT hub sink is used to send them to the IoT Edge hub.
-You can control the [routing of all the above events](../../iot-edge/module-composition.md#declare-routes) via a desired property of the $edgeHub module twin (in the deployment manifest):
+You can control the [routing of all the preceding events](../../iot-edge/module-composition.md#declare-routes) by using the `desired` property of the `$edgeHub` module twin in the deployment manifest:
```
"$edgeHub": {
@@ -146,96 +149,96 @@ You can control the [routing of all the above events](../../iot-edge/module-comp
}
```
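Filled in, a route of that shape might look like the following. The route name and the use of `$upstream` as the destination are illustrative; see the linked article for the full syntax.

```json
"$edgeHub": {
  "properties.desired": {
    "schemaVersion": "1.0",
    "routes": {
      "LVAToHub": "FROM /messages/modules/lvaEdge/outputs/* INTO $upstream"
    }
  }
}
```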
-In the above, lvaEdge is the name for the Live Video Analytics on IoT Edge module, and the routing rule follows the schema defined in [declare routes](../../iot-edge/module-composition.md#declare-routes).
+In the preceding JSON, `lvaEdge` is the name of the Live Video Analytics on IoT Edge module. The routing rule follows the schema defined in [Declare routes](../../iot-edge/module-composition.md#declare-routes).
> [!NOTE]
-> In order to ensure that analytics events reach the IoT Edge Hub, there needs to be an IoT hub sink node downstream of any motion detection processor node and/or any HTTP extension processor node.
+> To ensure that analytics events reach the IoT Edge hub, you need to have an IoT hub sink node downstream of any motion detection processor node and/or any HTTP extension processor node.
## Event schema
-Events originate on the Edge device, and can be consumed on the Edge or in the cloud. Events generated by Live Video Analytics on IoT Edge conform to the [streaming messaging pattern](../../iot-hub/iot-hub-devguide-messages-construct.md) established by Azure IoT Hub, with system properties, application properties, and a body.
+Events originate on the edge device and can be consumed at the edge or in the cloud. Events generated by Live Video Analytics on IoT Edge conform to the [streaming messaging pattern](../../iot-hub/iot-hub-devguide-messages-construct.md) established by Azure IoT Hub. The pattern consists of system properties, application properties, and a body.
### Summary
-Every event, when observed via the IoT Hub, will have a set of common properties as described below.
+Every event, when observed via IoT Hub, has a set of common properties:
-|Property |Property Type| Data Type |Description|
+|Property |Property type| Data type |Description|
|---|---|---|---|
-|message-id |system |guid| Unique event ID.|
-|topic| applicationProperty |string| Azure Resource Manager path for the Media Services account.|
-|subject| applicationProperty |string| Sub-path to the entity emitting the event.|
-|eventTime| applicationProperty| string| Time the event was generated.|
-|eventType| applicationProperty |string| Event Type identifier (see below).|
-|body|body |object| Particular event data.|
-|dataVersion |applicationProperty| string |{Major}.{Minor}|
+|`message-id` |system |guid| Unique event ID.|
+|`topic`| applicationProperty |string| Azure Resource Manager path for the Azure Media Services account.|
+|`subject`| applicationProperty |string| Subpath of the entity emitting the event.|
+|`eventTime`| applicationProperty| string| Time the event was generated.|
+|`eventType`| applicationProperty |string| Event type identifier. (See the following section.)|
+|`body`|body |object| Particular event data.|
+|`dataVersion` |applicationProperty| string |{Major}.{Minor}|
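Put together, a single event observed at the IoT hub might look like the following sketch. The values are illustrative (drawn from the examples earlier in this article), not from a real deployment.

```json
{
  "applicationProperties": {
    "topic": "/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.Media/mediaServices/{accountName}",
    "subject": "/graphInstances/myGraph/sources/myRtspSource",
    "eventType": "Microsoft.Media.Graph.Analytics.Inference",
    "eventTime": "2020-05-12T23:33:09.381Z",
    "dataVersion": "1.0"
  },
  "body": {
    "timestamp": 143039375031270
  }
}
```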
### Properties

#### message-id
-Event globally unique identifier (GUID)
+A globally unique identifier (GUID) for the event.
#### topic
-Represents the Azure Media Service account associated with the graph.
+Represents the Azure Media Services account associated with the graph.
`/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.Media/mediaServices/{accountName}`

#### subject
-Entity which is emitting the event:
+The entity that's emitting the event:
`/graphInstances/{graphInstanceName}`<br/> `/graphInstances/{graphInstanceName}/sources/{sourceName}`<br/> `/graphInstances/{graphInstanceName}/processors/{processorName}`<br/> `/graphInstances/{graphInstanceName}/sinks/{sinkName}`
-The subject property allows for generic events to be mapped to its generating module. For instance, in case of invalid RTSP username or password the generated event would be `Microsoft.Media.Graph.Diagnostics.ProtocolError` on the `/graphInstances/myGraph/sources/myRtspSource` node.
+The `subject` property allows you to map generic events to the generating module. For example, for an invalid RTSP user name or password, the generated event would be `Microsoft.Media.Graph.Diagnostics.ProtocolError` on the `/graphInstances/myGraph/sources/myRtspSource` node.
#### Event types
-Event types are assigned to a namespace according with the following schema:
+Event types are assigned to a namespace according to this schema:
`Microsoft.Media.Graph.{EventClass}.{EventType}`

#### Event classes
-|Class Name|Description|
+|Class name|Description|
|---|---|
|Analytics |Events generated as part of content analysis.|
-|Diagnostics |Events that aid with diagnostics of problems and performance.|
+|Diagnostics |Events that help with the diagnostics of problems and performance.|
|Operational |Events generated as part of resource operation.|

The event types are specific to each event class. Examples:
-* Microsoft.Media.Graph.Analytics.Inference
-* Microsoft.Media.Graph.Diagnostics.AuthorizationError
-* Microsoft.Media.Graph.Operational.GraphInstanceStarted
+* `Microsoft.Media.Graph.Analytics.Inference`
+* `Microsoft.Media.Graph.Diagnostics.AuthorizationError`
+* `Microsoft.Media.Graph.Operational.GraphInstanceStarted`
### Event time
-Event time is described in ISO8601 string and it the time the event occurred.
+Event time is formatted in an ISO 8601 string. It represents the time when the event occurred.
-### Azure Monitor Collection using Telegraf
+### Azure Monitor collection via Telegraf
-These metrics will be reported the Live Video Analytics on IoT Edge module:
+These metrics will be reported from the Live Video Analytics on IoT Edge module:
-|Metric Name|Type|Label|Description|
+|Metric name|Type|Label|Description|
|-----------|----|-----|-----------|
|lva_active_graph_instances|Gauge|iothub, edge_device, module_name, graph_topology|Total number of active graphs per topology.|
-|lva_received_bytes_total|Counter|iothub, edge_device, module_name, graph_topology, graph_instance, graph_node|The total number of bytes received by a node. Only supported for RTSP Sources|
-|lva_data_dropped_total|Counter|iothub, edge_device, module_name, graph_topology, graph_instance, graph_node, data_kind|Counter of any dropped data (events, media, etc.)|
+|lva_received_bytes_total|Counter|iothub, edge_device, module_name, graph_topology, graph_instance, graph_node|Total number of bytes received by a node. Supported only for RTSP sources.|
+|lva_data_dropped_total|Counter|iothub, edge_device, module_name, graph_topology, graph_instance, graph_node, data_kind|Counter of any dropped data (events, media, and so on).|
> [!NOTE]
-> A [Prometheus endpoint](https://prometheus.io/docs/practices/naming/) is exposed at port **9600** of the container. If you name your Live Video Analytics on IoT Edge module "lvaEdge", they would be able to access metrics by sending a GET request to http://lvaEdge:9600/metrics.
+> A [Prometheus endpoint](https://prometheus.io/docs/practices/naming/) is exposed at port 9600 of the container. If you name your Live Video Analytics on IoT Edge module "lvaEdge," other modules can access metrics by sending a GET request to http://lvaEdge:9600/metrics.
Follow these steps to enable the collection of metrics from the Live Video Analytics on IoT Edge module:
-1. Create a folder on your development machine and navigate to that folder
+1. Create a folder on your development computer, and go to that folder.
-1. In that folder, create `telegraf.toml` file with the following contents
+1. In the folder, create a `telegraf.toml` file that contains the following configurations:
```
[agent]
  interval = "30s"
@@ -251,25 +254,26 @@ Follow these steps to enable the collection of metrics from the Live Video Analy
resource_id = "/subscriptions/{SUBSCRIPTON_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Devices/IotHubs/{IOT_HUB_NAME}"
```
> [!IMPORTANT]
- > Make sure you replace the variables (marked by the `{ }`) in the content file
+ > Be sure to replace the variables in the .toml file. The variables are denoted by braces (`{}`).
-1. In that folder, create a `.dockerfile` with the following content
+1. In the same folder, create a `.dockerfile` that contains the following commands:
```
FROM telegraf:1.15.3-alpine
COPY telegraf.toml /etc/telegraf/telegraf.conf
```
-1. Now using docker CLI command **build the docker file** and publish the image to your Azure Container Registry.
- 1. Learn how to [Push and Pull Docker images - Azure Container Registry](https://docs.microsoft.com/azure/container-registry/container-registry-get-started-docker-cli). More on Azure Container Registry (ACR) can be found [here](https://docs.microsoft.com/azure/container-registry/).
+1. Use Docker CLI commands to build the Docker file and publish the image to your Azure container registry.
+
+ For more information about using the Docker CLI to push to a container registry, see [Push and pull Docker images](https://docs.microsoft.com/azure/container-registry/container-registry-get-started-docker-cli). For other information about Azure Container Registry, see the [documentation](https://docs.microsoft.com/azure/container-registry/).
-1. Once the push to ACR is complete, in your deployment manifest file, add the following node:
+1. After the push to Azure Container Registry is complete, add the following node to your deployment manifest file:
```
"telegraf": {
  "settings": {
- "image": "{ACR_LINK_TO_YOUR_TELEGRAF_IMAGE}"
+ "image": "{AZURE_CONTAINER_REGISTRY_LINK_TO_YOUR_TELEGRAF_IMAGE}"
  },
  "type": "docker",
  "version": "1.0",
@@ -283,64 +287,72 @@ Follow these steps to enable the collection of metrics from the Live Video Analy
}
```
> [!IMPORTANT]
- > Make sure you replace the variables (marked by the `{ }`) in the content file
+ > Be sure to replace the variables in the manifest file. The variables are denoted by braces (`{}`).
-1. **Authentication**
- 1. Azure Monitor may be [authenticated by Service Principal](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/azure_monitor/README.md#azure-authentication).
- 1. The Azure Monitor Telegraf plugin exposes [several methods of authentication](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/azure_monitor/README.md#azure-authentication). The following environment variables must be set to use Service Principal authentication.
- • AZURE_TENANT_ID: Specifies the Tenant to which to authenticate.
- • AZURE_CLIENT_ID: Specifies the app client ID to use.
- • AZURE_CLIENT_SECRET: Specifies the app secret to use.
- >[!TIP]
- > The Service Principal can be given the "**Monitoring Metrics Publisher**" role.
+ Azure Monitor can be [authenticated via service principal](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/azure_monitor/README.md#azure-authentication).
+
+ The Azure Monitor Telegraf plug-in exposes [several methods of authentication](https://github.com/influxdata/telegraf/blob/master/plugins/outputs/azure_monitor/README.md#azure-authentication).
+
+ 1. To use service principal authentication, set these environment variables:
+ `AZURE_TENANT_ID`: Specifies the tenant to authenticate to.
+ `AZURE_CLIENT_ID`: Specifies the app client ID to use.
+ `AZURE_CLIENT_SECRET`: Specifies the app secret to use.
+
+ >[!TIP]
+ > You can give the service principal the **Monitoring Metrics Publisher** role.
+
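In the deployment manifest, those variables could be supplied through the Telegraf module's `env` section, for example as follows. The values in braces are placeholders, and the overall shape is a sketch rather than a verified manifest.

```json
"telegraf": {
  "env": {
    "AZURE_TENANT_ID": { "value": "{TENANT_ID}" },
    "AZURE_CLIENT_ID": { "value": "{CLIENT_ID}" },
    "AZURE_CLIENT_SECRET": { "value": "{CLIENT_SECRET}" }
  }
}
```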
+1. After the modules are deployed, metrics will appear in Azure Monitor under a single namespace. Metric names will match the ones emitted by Prometheus.
+
+ In this case, in the Azure portal, go to the IoT hub and select **Metrics** in the left pane. You should see the metrics there.
-1. Once the modules are deployed, metrics will appear in Azure Monitor under a single namespace with metric names matching the ones emitted by Prometheus.
- 1. In this case, in your Azure portal, navigate to the IoT Hub and click on the "**Metrics**" link in the left navigation pane. You should see the metrics there.
## Logging
-Like with other IoT Edge modules, you can also [examine the container logs](../../iot-edge/troubleshoot.md#check-container-logs-for-issues) on the Edge device. The information that is written to the logs can be controlled by the [following module twin](module-twin-configuration-schema.md) properties:
+As with other IoT Edge modules, you can also [examine the container logs](../../iot-edge/troubleshoot.md#check-container-logs-for-issues) on the edge device. You can configure the information that's written to the logs by using the [following module twin](module-twin-configuration-schema.md) properties:
-* logLevel
+* `logLevel`
- * Allowed values are Verbose, Information, Warning, Error, None.
- * Default value is Information – the logs will contain error, warning, and information. messages.
- * If you set the value to Warning, the logs will contain error and warning messages
- * If you set the value to Error, the logs will only contain error messages.
- * If you set the value to None, no logs will be generated (this is not recommended).
- * You should only use Verbose if you need to share logs with Azure Support for diagnosing an issue.
-* logCategories
+ * Allowed values are `Verbose`, `Information`, `Warning`, `Error`, and `None`.
+ * The default value is `Information`. The logs will contain error, warning, and information messages.
+ * If you set the value to `Warning`, the logs will contain error and warning messages.
+ * If you set the value to `Error`, the logs will contain only error messages.
+ * If you set the value to `None`, no logs will be generated. (We don't recommend this configuration.)
+ * Use `Verbose` only if you need to share logs with Azure support to diagnose a problem.
- * A comma-separated list of one or more of the following: Application, Events, MediaPipeline.
- * Default: Application, Events.
- * Application – this is high-level information from the module, such as module startup messages, environment errors, and direct method calls.
- * Events – these are all the events that were described earlier in this article.
- * MediaPipeline – these are some low-level logs that may offer insight when troubleshooting issues, such as difficulties establishing a connection with an RTSP-capable camera.
+* `logCategories`
+
+ * A comma-separated list of one or more of these values: `Application`, `Events`, `MediaPipeline`.
+ * The default value is `Application, Events`.
+ * `Application`: High-level information from the module, like module startup messages, environment errors, and direct method calls.
+ * `Events`: All the events that were described earlier in this article.
+ * `MediaPipeline`: Low-level logs that might offer insight when you're troubleshooting problems, like difficulties establishing a connection with an RTSP-capable camera.
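For example, the two properties might be combined in the module twin's desired properties like this. This is a sketch; the module twin JSON schema linked previously is the authoritative shape.

```json
"properties.desired": {
  "logLevel": "Information",
  "logCategories": "Application,Events"
}
```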
### Generating debug logs
-In certain cases, you may need to generate more detailed logs than the ones described above, to help Azure support resolve an issue. There are two steps to accomplish this.
+In certain cases, to help Azure support resolve a problem, you might need to generate more detailed logs than the ones described previously. To generate these logs:
-First, you [link the module storage to the device storage](../../iot-edge/how-to-access-host-storage-from-module.md#link-module-storage-to-device-storage) via createOptions. If you examine a [deployment manifest template](https://github.com/Azure-Samples/live-video-analytics-iot-edge-csharp/blob/master/src/edge/deployment.template.json) from the quick-starts, you will see:
+1. [Link the module storage to the device storage](../../iot-edge/how-to-access-host-storage-from-module.md#link-module-storage-to-device-storage) via `createOptions`. If you look at a [deployment manifest template](https://github.com/Azure-Samples/live-video-analytics-iot-edge-csharp/blob/master/src/edge/deployment.template.json) from the quickstarts, you'll see this code:
-```
-"createOptions": {
- …
- "Binds": [
- "/var/local/mediaservices/:/var/lib/azuremediaservices/"
- ]
- }
-```
+ ```
+ "createOptions": {
+ …
+ "Binds": [
+ "/var/local/mediaservices/:/var/lib/azuremediaservices/"
+ ]
+ }
+ ```
+
+ This code lets the Edge module write logs to the device storage path `/var/local/mediaservices/`.
-Above lets the Edge module write logs to the (device) storage path "/var/local/mediaservices/". If you add the following desired property to the module:
+ 1. Add the following `desired` property to the module:
-`"debugLogsDirectory": "/var/lib/azuremediaservices/debuglogs/",`
+ `"debugLogsDirectory": "/var/lib/azuremediaservices/debuglogs/",`
-Then, the module will write debug logs in a binary format to the (device) storage path /var/local/mediaservices/debuglogs/, which you can share with Azure Support.
+The module will now write debug logs in a binary format to the device storage path `/var/local/mediaservices/debuglogs/`. You can share these logs with Azure support.
## FAQ
-[FAQs](faq.md#monitoring-and-metrics)
+If you have questions, see the [monitoring and metrics FAQ](faq.md#monitoring-and-metrics).
## Next steps
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/upgrading-lva-module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/upgrading-lva-module.md
@@ -165,7 +165,7 @@ With this release, Telegraf can be used to send metrics to Azure Monitor. From t
> [!div class="mx-imgBorder"]
> :::image type="content" source="./media/telemetry-schema/telegraf.png" alt-text="Taxonomy of events":::
-You can produce a Telegraf image with a custom configuration easily using docker. Learn more about this in the [Monitoring and logging](monitoring-logging.md#azure-monitor-collection-using-telegraf) page.
+You can easily produce a Telegraf image with a custom configuration by using Docker. Learn more about this in the [Monitoring and logging](monitoring-logging.md#azure-monitor-collection-via-telegraf) page.
## Next steps
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/release-notes.md
@@ -26,6 +26,12 @@ To stay up-to-date with the most recent developments, this article provides you
* Bug fixes * Deprecated functionality
+## December 2020
+
+### Video Indexer deployed in the Switzerland West and Switzerland North
+
+You can now create a Video Indexer paid account in the Switzerland West and Switzerland North regions.
+
## October 2020

### Animated character identification improvements
migrate https://docs.microsoft.com/en-us/azure/migrate/migrate-support-matrix-vmware-migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware-migration.md
@@ -54,7 +54,7 @@ The table summarizes agentless migration requirements for VMware VMs.
**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br/> - Cent OS 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 12 SP1+<br/> - SUSE Linux Enterprise Server 15 SP1 <br/>- Ubuntu 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 7, 8 <br/> Oracle Linux 7.7, 7.7-CI<br/> For other operating systems you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.
**Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks.
**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
-**Disk size** | 2 TB OS disk (BIOS boot); 4 TB OS disk (UEFI boot); 32 TB for data disks.
+**Disk size** | 2 TB OS disk; 32 TB for data disks.
**Disk limits** | Up to 60 disks per VM.
**Encrypted disks/volumes** | VMs with encrypted disks/volumes aren't supported for migration.
**Shared disk cluster** | Not supported.
@@ -117,7 +117,7 @@ The table summarizes VMware VM support for VMware VMs you want to migrate using
**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
**UEFI - Secure boot** | Not supported for migration.
**Target disk** | VMs can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
-**Disk size** | 2 TB OS disk (BIOS boot); 4 TB OS disk (UEFI boot); 8 TB for data disks.
+**Disk size** | 2 TB OS disk; 32 TB for data disks.
**Disk limits** | Up to 63 disks per VM.
**Encrypted disks/volumes** | VMs with encrypted disks/volumes aren't supported for migration.
**Shared disk cluster** | Not supported.
migrate https://docs.microsoft.com/en-us/azure/migrate/server-migrate-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/server-migrate-overview.md
@@ -39,7 +39,7 @@ After reviewing the limitations, understanding the steps involved in deploying e
**Task** | **Details** |**Agentless** | **Agent-based**
--- | --- | --- | ---
**Deploy the Azure Migrate appliance** | A lightweight appliance that runs on a VMware VM.<br/><br/> The appliance is used to discover and assess machines, and to migrate machines using agentless migration. | Required.<br/><br/> If you've already set up the appliance for assessment, you can use the same appliance for agentless migration. | Not required.<br/><br/> If you've set up an appliance for assessment, you can leave it in place, or remove it if you're done with assessment.
-**Use the Server Assessment tool** | Assess machines with the Azure Migrate:Server Assessment tool. | You can assess machines before you migrate them, but you don't have to. | Assessment is optional | Assessment is optional.
+**Use the Server Assessment tool** | Assess machines with the Azure Migrate:Server Assessment tool. | You can assess machines before you migrate them, but you don't have to. | Assessment is optional.
**Use the Server Migration tool** | Add the Azure Migrate Server Migration tool in the Azure Migrate project. | Required | Required
**Prepare VMware for migration** | Configure settings on VMware servers and VMs. | Required | Required
**Install the Mobility service on VMs** | Mobility service runs on each VM you want to replicate | Not required | Required
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-overview.md
@@ -13,7 +13,7 @@ ms.devlang: na
ms.topic: how-to ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 11/23/2020
+ms.date: 01/04/2021
ms.author: vinigam ms.custom: mvc #Customer intent: I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/connection-monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor.md
@@ -13,7 +13,7 @@ ms.devlang: na
ms.topic: tutorial ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 11/23/2020
+ms.date: 01/04/2021
ms.author: damendo ms.custom: mvc ---
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-connectivity-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-connectivity-portal.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: troubleshooting ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 08/03/2017
+ms.date: 01/04/2021
ms.author: damendo ---
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-ip-flow-verify-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-ip-flow-verify-overview.md
@@ -9,7 +9,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 11/30/2017
+ms.date: 01/04/2021
ms.author: damendo ---
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-monitoring-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-monitoring-overview.md
@@ -13,7 +13,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 04/24/2018
+ms.date: 01/04/2021
ms.author: damendo ms.custom: mvc ---
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
@@ -11,7 +11,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 02/22/2017
+ms.date: 01/04/2021
ms.author: damendo ---
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-read-nsg-flow-logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-read-nsg-flow-logs.md
@@ -9,7 +9,7 @@ ms.devlang: na
ms.topic: how-to ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 12/13/2017
+ms.date: 01/04/2021
ms.author: damendo ---
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/traffic-analytics-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics-faq.md
@@ -9,7 +9,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 03/08/2018
+ms.date: 01/04/2021
ms.author: damendo ---
network-watcher https://docs.microsoft.com/en-us/azure/network-watcher/traffic-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 06/15/2018
+ms.date: 01/04/2021
ms.author: damendo ms.reviewer: vinigam ms.custom: references_regions
networking https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/microsoft-global-network.md
@@ -10,7 +10,7 @@ ms.devlang:
ms.topic: article ms.tgt_pltfrm: na ms.workload: infrastructure-services
-ms.date: 06/13/2019
+ms.date: 01/05/2021
ms.author: kumud ms.reviewer: ypitsch ---
@@ -66,4 +66,5 @@ These principles apply to all layers of the network: from the host Network Inter
The exponential growth of Azure and its network has reached a point where we eventually realized that human intuition could no longer be relied on to manage the global network operations. To fulfill the need to validate long, medium, and short-term changes on the network, we developed a platform to mirror and emulate our production network synthetically. The ability to create mirrored environments and run millions of simulations, allows us to test software and hardware changes and their impact, before committing them to our production platform and network.

## Next steps

-- [Learn more about the networking services provided in Azure](https://azure.microsoft.com/product-categories/networking/)
+- [Learn about how Microsoft is advancing global network reliability through intelligent software](https://azure.microsoft.com/blog/advancing-global-network-reliability-through-intelligent-software-part-1-of-2/)
+- [Learn more about the networking services provided in Azure](https://azure.microsoft.com/product-categories/networking/)
\ No newline at end of file
private-link https://docs.microsoft.com/en-us/azure/private-link/private-link-faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-link-faq.md
@@ -39,8 +39,8 @@ Yes. You can have multiple private endpoints in same VNet or subnet. They can co
### Do I require a dedicated subnet for private endpoints?
No. You don't require a dedicated subnet for private endpoints. You can choose a private endpoint IP from any subnet in the VNet where your service is deployed.
-### Can Private Endpoint connect to Private Link service across Azure Active Directory Tenants?
-Yes. Private endpoints can connect to Private Link services or Azure PaaS across AD tenants.
+### Can a private endpoint connect to Private Link services across Azure Active Directory tenants?
+Yes. Private endpoints can connect to Private Link services or to an Azure PaaS across Azure Active Directory tenants. Private endpoints that connect across tenants require a manual request approval.
### Can a private endpoint connect to Azure PaaS resources across Azure regions?
Yes. Private endpoints can connect to Azure PaaS resources across Azure regions.
search https://docs.microsoft.com/en-us/azure/search/cognitive-search-common-errors-warnings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-common-errors-warnings.md
@@ -4,8 +4,8 @@ titleSuffix: Azure Cognitive Search
description: This article provides information and solutions to common errors and warnings you might encounter during AI enrichment in Azure Cognitive Search. manager: nitinme
-author: nitinme
-ms.author: nitinme
+author: HeidiSteen
+ms.author: heidist
ms.service: cognitive-search ms.topic: conceptual ms.date: 09/23/2020
security-center https://docs.microsoft.com/en-us/azure/security-center/alerts-reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 12/30/2020
+ms.date: 01/05/2021
ms.author: memildin ---
@@ -384,20 +384,20 @@ At the bottom of this page, there's a table describing the Azure Security Center
| Alert | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
|---|---|:---:|---|
-| **PREVIEW ΓÇô Access from a suspicious IP address**<br>(Storage.Blob_AccessInspectionAnomaly<br>Storage.Files_AccessInspectionAnomaly) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | Medium |
-| **Access from a Tor exit node to a storage account**<br>(Storage.Blob_AnonymousAccessAnomaly) | Indicates that this account has been accessed successfully from an IP address that is known as an active exit node of Tor (an anonymizing proxy). The severity of this alert considers the authentication type used (if any), and whether this is the first case of such access. Potential causes can be an attacker who has accessed your storage account by using Tor, or a legitimate user who has accessed your storage account by using Tor.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Probing / Exploitation | High |
-| **Access from an unusual location to a storage account**<br>(Storage.Blob_ApplicationAnomaly<br>Storage.Files_ApplicationAnomaly) | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exploitation | Low |
-| **Anonymous access to a storage account**<br>(Storage.Blob_CspkgUploadAnomaly) | Indicates that there's a change in the access pattern to a storage account. For instance, the account has been accessed anonymously (without any authentication), which is unexpected compared to the recent access pattern on this account. A potential cause is that an attacker has exploited public read access to a container that holds blob storage.<br>Applies to: Azure Blob Storage | Exploitation | High |
-| **PREVIEW ΓÇô Phishing content hosted on a storage account**<br>(Storage.Blob_DataExfiltration.AmountOfDataAnomaly<br>Storage.Files_DataExfiltration.AmountOfDataAnomaly) | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High |
-| **Potential malware uploaded to a storage account**<br>(Storage.Blob_DataExfiltration.NumberOfBlobsAnomaly<br>Storage.Files_DataExfiltration.NumberOfFilesAnomaly) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-is-hash-reputation-analysis-for-malware).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | LateralMovement | High |
-| **Unusual access inspection in a storage account**<br>(Storage.Blob_DataExplorationAnomaly<br>Storage.Files_DataExplorationAnomaly) | Indicates that the access permissions of a storage account have been inspected in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
-| **Unusual amount of data extracted from a storage account**<br>(Storage.Blob_DeletionAnomaly<br>Storage.Files_DeletionAnomaly) | Indicates that an unusually large amount of data has been extracted compared to recent activity on this storage container. A potential cause is that an attacker has extracted a large amount of data from a container that holds blob storage.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
-| **Unusual application accessed a storage account**<br>(Storage.Blob_ExeUploadAnomaly<br>Storage.Files_ExeUploadAnomaly) | Indicates that an unusual application has accessed this storage account. A potential cause is that an attacker has accessed your storage account by using a new application.<br>Applies to: Azure Blob Storage, Azure Files | Exploitation | Medium |
-| **Unusual change of access permissions in a storage account**<br>(Storage.Blob_GeoAnomaly<br>Storage.Files_GeoAnomaly) | Indicates that the access permissions of this storage container have been changed in an unusual way. A potential cause is that an attacker has changed container permissions to weaken its security posture or to gain persistence.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Persistence | Medium |
-| **Unusual data exploration in a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that blobs or containers in a storage account have been enumerated in an abnormal way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
-| **Unusual deletion in a storage account**<br>(Storage.Blob_PermissionsChangeAnomaly<br>Storage.Files_PermissionsChangeAnomaly) | Indicates that one or more unexpected delete operations has occurred in a storage account, compared to recent activity on this account. A potential cause is that an attacker has deleted data from your storage account.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
-| **Unusual upload of .cspkg to a storage account**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that an Azure Cloud Services package (.cspkg file) has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has been preparing to deploy malicious code from your storage account to an Azure cloud service.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
-| **Unusual upload of .exe to a storage account**<br>(Storage.Blob_TorAnomaly<br>Storage.Files_TorAnomaly) | Indicates that an .exe file has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has uploaded a malicious executable file to your storage account, or that a legitimate user has uploaded an executable file.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
+| **PREVIEW – Access from a suspicious IP address**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | Medium |
+| **Access from a Tor exit node to a storage account**<br>(Storage.Blob_TorAnomaly<br>Storage.Files_TorAnomaly) | Indicates that this account has been accessed successfully from an IP address that is known as an active exit node of Tor (an anonymizing proxy). The severity of this alert considers the authentication type used (if any), and whether this is the first case of such access. Potential causes can be an attacker who has accessed your storage account by using Tor, or a legitimate user who has accessed your storage account by using Tor.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Probing / Exploitation | High |
+| **Access from an unusual location to a storage account**<br>(Storage.Blob_GeoAnomaly<br>Storage.Files_GeoAnomaly) | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exploitation | Low |
+| **Anonymous access to a storage account**<br>(Storage.Blob_AnonymousAccessAnomaly) | Indicates that there's a change in the access pattern to a storage account. For instance, the account has been accessed anonymously (without any authentication), which is unexpected compared to the recent access pattern on this account. A potential cause is that an attacker has exploited public read access to a container that holds blob storage.<br>Applies to: Azure Blob Storage | Exploitation | High |
+| **PREVIEW – Phishing content hosted on a storage account**<br>(Storage.Blob_PhishingContent<br>Storage.Files_PhishingContent) | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High |
+| **Potential malware uploaded to a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-is-hash-reputation-analysis-for-malware).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | LateralMovement | High |
+| **Unusual access inspection in a storage account**<br>(Storage.Blob_AccessInspectionAnomaly<br>Storage.Files_AccessInspectionAnomaly) | Indicates that the access permissions of a storage account have been inspected in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
+| **Unusual amount of data extracted from a storage account**<br>(Storage.Blob_DataExfiltration.AmountOfDataAnomaly<br>Storage.Blob_DataExfiltration.NumberOfBlobsAnomaly<br>Storage.Files_DataExfiltration.AmountOfDataAnomaly<br>Storage.Files_DataExfiltration.NumberOfFilesAnomaly) | Indicates that an unusually large amount of data has been extracted compared to recent activity on this storage container. A potential cause is that an attacker has extracted a large amount of data from a container that holds blob storage.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
+| **Unusual application accessed a storage account**<br>(Storage.Blob_ApplicationAnomaly<br>Storage.Files_ApplicationAnomaly) | Indicates that an unusual application has accessed this storage account. A potential cause is that an attacker has accessed your storage account by using a new application.<br>Applies to: Azure Blob Storage, Azure Files | Exploitation | Medium |
+| **Unusual change of access permissions in a storage account**<br>(Storage.Blob_PermissionsChangeAnomaly<br>Storage.Files_PermissionsChangeAnomaly) | Indicates that the access permissions of this storage container have been changed in an unusual way. A potential cause is that an attacker has changed container permissions to weaken its security posture or to gain persistence.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Persistence | Medium |
+| **Unusual data exploration in a storage account**<br>(Storage.Blob_DataExplorationAnomaly<br>Storage.Files_DataExplorationAnomaly) | Indicates that blobs or containers in a storage account have been enumerated in an abnormal way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Collection | Medium |
+| **Unusual deletion in a storage account**<br>(Storage.Blob_DeletionAnomaly<br>Storage.Files_DeletionAnomaly) | Indicates that one or more unexpected delete operations has occurred in a storage account, compared to recent activity on this account. A potential cause is that an attacker has deleted data from your storage account.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | Medium |
+| **Unusual upload of .cspkg to a storage account**<br>(Storage.Blob_CspkgUploadAnomaly) | Indicates that an Azure Cloud Services package (.cspkg file) has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has been preparing to deploy malicious code from your storage account to an Azure cloud service.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
+| **Unusual upload of .exe to a storage account**<br>(Storage.Blob_ExeUploadAnomaly<br>Storage.Files_ExeUploadAnomaly) | Indicates that an .exe file has been uploaded to a storage account in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has uploaded a malicious executable file to your storage account, or that a legitimate user has uploaded an executable file.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | LateralMovement / Execution | Medium |
| | | | |
security-center https://docs.microsoft.com/en-us/azure/security-center/secure-score-security-controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/secure-score-security-controls.md
@@ -11,7 +11,7 @@ ms.devlang: na
ms.topic: article ms.tgt_pltfrm: na ms.workload: na
-ms.date: 11/10/2020
+ms.date: 01/05/2021
ms.author: memildin ---
@@ -68,7 +68,7 @@ To recap, your secure score is shown in the following locations in Security Cent
### Get your secure score from the REST API
-You can access your score via the secure score API (currently in preview). The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the [Secure Scores API](/rest/api/securitycenter/securescores) to get the score for a specific subscription. In addition, you can use the [Secure Score Controls API](/rest/api/securitycenter/securescorecontrols) to list the security controls and the current score of your subscriptions.
+You can access your score via the secure score API. The API methods provide the flexibility to query the data and build your own reporting mechanism of your secure scores over time. For example, you can use the [Secure Scores API](/rest/api/securitycenter/securescores) to get the score for a specific subscription. In addition, you can use the [Secure Score Controls API](/rest/api/securitycenter/securescorecontrols) to list the security controls and the current score of your subscriptions.
![Retrieving a single secure score via the API](media/secure-score-security-controls/single-secure-score-via-api.png)
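As an illustrative sketch only (not an official sample), the following Python shows how such a report could be pulled: it builds the management-plane request for the Secure Scores API described above. The subscription ID and bearer token are placeholders you must supply, and the `api-version` value is an assumption based on the published `Microsoft.Security/secureScores` REST reference.

```python
# Sketch: query the Secure Scores REST API for a subscription.
# Placeholders: subscription_id and bearer_token must be supplied by the caller.
import json
import urllib.request

def secure_scores_url(subscription_id: str, api_version: str = "2020-01-01") -> str:
    """Build the management-plane URL that lists secure scores for a subscription."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        "/providers/Microsoft.Security/secureScores"
        f"?api-version={api_version}"
    )

def get_secure_scores(subscription_id: str, bearer_token: str) -> dict:
    """Send the GET request; requires a valid Azure AD access token for ARM."""
    req = urllib.request.Request(
        secure_scores_url(subscription_id),
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; needs credentials
        return json.load(resp)
```

The same pattern applies to the Secure Score Controls API by swapping the resource path; querying on a schedule lets you build your own score-over-time reporting.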
security-center https://docs.microsoft.com/en-us/azure/security-center/upcoming-changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 12/14/2020
+ms.date: 01/05/2021
ms.author: memildin ---
@@ -40,7 +40,7 @@ The only impact will be seen in Azure Policy where the number of compliant resou
### 35 preview recommendations being added to increase coverage of Azure Security Benchmark
-**Estimated date for change:** December 2020
+**Estimated date for change:** January 2021
Azure Security Benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. [Learn more about Azure Security Benchmark](../security/benchmarks/introduction.md).
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cef-agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-agent.md
@@ -10,10 +10,10 @@ editor: ''
ms.service: azure-sentinel ms.subservice: azure-sentinel ms.devlang: na
-ms.topic: conceptual
+ms.topic: how-to
ms.tgt_pltfrm: na ms.workload: na
-ms.date: 10/01/2020
+ms.date: 01/05/2021
ms.author: yelevin ---
@@ -47,13 +47,13 @@ In this step, you will designate and configure the Linux machine that will forwa
1. Under **1.2 Install the CEF collector on the Linux machine**, copy the link provided under **Run the following script to install and apply the CEF collector**, or from the text below (applying the Workspace ID and Primary Key in place of the placeholders):

```bash
- sudo wget -O https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py [WorkspaceID] [Workspace Primary Key]`
+   sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py [WorkspaceID] [Workspace Primary Key]
```

1. While the script is running, check to make sure you don't get any error or warning messages.
   - You may get a message directing you to run a command to correct an issue with the mapping of the *Computer* field. See the [explanation in the deployment script](#mapping-command) for details.
-1. Continue to [STEP 2: Configure your security solution to forward CEF messages](connect-cef-solution-config.md) .
+1. Continue to [STEP 2: Configure your security solution to forward CEF messages](connect-cef-solution-config.md).
> [!NOTE]
@@ -185,8 +185,7 @@ Choose a syslog daemon to see the appropriate description.
Contents of the `security-config-omsagent.conf` file:

```bash
- filter f_oms_filter {match(\"CEF\|ASA\" ) ;};
- destination oms_destination {tcp(\"127.0.0.1\" port("25226"));};
+ filter f_oms_filter {match(\"CEF\|ASA\" ) ;};destination oms_destination {tcp(\"127.0.0.1\" port(25226));};
log {source(s_src);filter(f_oms_filter);destination(oms_destination);};
```
sentinel https://docs.microsoft.com/en-us/azure/sentinel/connect-cef-verify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-verify.md
@@ -10,10 +10,10 @@ editor: ''
ms.service: azure-sentinel ms.subservice: azure-sentinel ms.devlang: na
-ms.topic: conceptual
+ms.topic: how-to
ms.tgt_pltfrm: na ms.workload: na
-ms.date: 10/01/2020
+ms.date: 01/05/2021
ms.author: yelevin ---
@@ -40,7 +40,7 @@ Be aware that it may take about 20 minutes until your logs start to appear in **
1. Run the following script on the log forwarder (applying the Workspace ID in place of the placeholder) to check connectivity between your security solution, the log forwarder, and Azure Sentinel. This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity.

```bash
- sudo wget -O https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID]`
+   sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID]
```

   - You may get a message directing you to run a command to correct an issue with the **mapping of the *Computer* field**. See the [explanation in the validation script](#mapping-command) for details.
@@ -203,8 +203,7 @@ The validation script performs the following checks:
- Configuration file: `/etc/syslog-ng/conf.d/security-config-omsagent.conf`

```bash
- filter f_oms_filter {match(\"CEF\|ASA\" ) ;};
- destination oms_destination {tcp(\"127.0.0.1\" port("25226"));};
+ filter f_oms_filter {match(\"CEF\|ASA\" ) ;};destination oms_destination {tcp(\"127.0.0.1\" port(25226));};
log {source(s_src);filter(f_oms_filter);destination(oms_destination);};
```
sentinel https://docs.microsoft.com/en-us/azure/sentinel/identify-threats-with-entity-behavior-analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
@@ -175,6 +175,8 @@ Entity pages are designed to be part of multiple usage scenarios, and can be acc
| **InvestigationPriority** | anomaly score, between 0-10 (0=benign, 10=highly anomalous) | |
+You can see the full set of contextual enrichments referenced in **UsersInsights**, **DevicesInsights**, and **ActivityInsights** in the [UEBA enrichments reference document](ueba-enrichments.md).
+
### Querying behavior analytics data

Using [KQL](/azure/data-explorer/kusto/query/), we can query the Behavioral Analytics Table.
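As a hedged sketch of what such a query could look like when driven programmatically, the following Python builds (but does not send) a request to the Log Analytics query REST endpoint against the `BehaviorAnalytics` table. The workspace ID and token are placeholders, and the KQL filter uses `InvestigationPriority` from the schema shown above; treat the exact query as illustrative rather than a documented sample.

```python
# Sketch: assemble a Log Analytics REST query against the BehaviorAnalytics table.
# Placeholders: workspace_id and token must be supplied by the caller.
import json
import urllib.request

# Illustrative KQL: surface the most anomalous recent activities.
KQL = "BehaviorAnalytics | where InvestigationPriority > 5 | take 10"

def build_query_request(workspace_id: str, token: str, kql: str = KQL) -> urllib.request.Request:
    """Build the POST request for the v1 query endpoint (sending requires auth)."""
    return urllib.request.Request(
        f"https://api.loganalytics.io/v1/workspaces/{workspace_id}/query",
        data=json.dumps({"query": kql}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen` returns the query results as JSON tables, which you can feed into your own alerting or reporting.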
sentinel https://docs.microsoft.com/en-us/azure/sentinel/ueba-enrichments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/ueba-enrichments.md new file mode 100644
@@ -0,0 +1,160 @@
+---
+title: Azure Sentinel UEBA enrichments reference | Microsoft Docs
+description: This article displays the entity enrichments generated by Azure Sentinel's entity behavior analytics.
+services: sentinel
+cloud: na
+documentationcenter: na
+author: yelevin
+manager: rkarlin
+
+ms.assetid:
+ms.service: azure-sentinel
+ms.subservice: azure-sentinel
+ms.workload: na
+ms.tgt_pltfrm: na
+ms.devlang: na
+ms.topic: reference
+ms.date: 01/04/2021
+ms.author: yelevin
+---
+
+# Azure Sentinel UEBA enrichments reference
+
+These tables list and describe entity enrichments that can be used to focus and sharpen your investigation of security incidents.
+
+The first two tables, **User insights** and **Device insights**, contain entity information from Active Directory / Azure AD and Microsoft Threat Intelligence sources.
+
+<a name="baseline-explained"></a>The rest of the tables, under **Activity insights tables**, contain entity information based on the behavioral profiles built by Azure Sentinel's entity behavior analytics. The activities are analyzed against a baseline that is dynamically compiled each time it is used. Each activity has its defined lookback period from which this dynamic baseline is derived. This period is specified in the [**Baseline**](#activity-insights-tables) column of each activity insights table.
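The dynamic-baseline idea above can be sketched in a few lines. This is a hypothetical illustration of the "first time" semantics, not Sentinel's implementation; the function and field names are invented for the example, and the 180-day window matches the lookback periods listed in the tables below.

```python
# Hypothetical sketch: flag an activity as "first time" when its value does not
# appear in the baseline rebuilt from the enrichment's lookback window.
from datetime import datetime, timedelta

def first_time_in_window(history, now, value, lookback_days):
    """True if `value` does not appear in any event inside the lookback window."""
    cutoff = now - timedelta(days=lookback_days)
    # The baseline is compiled dynamically from events within the window.
    baseline = {v for (ts, v) in history if ts >= cutoff}
    return value not in baseline

now = datetime(2021, 1, 4)
history = [
    (now - timedelta(days=200), "ResetPassword"),  # outside the 180-day window
    (now - timedelta(days=5), "SignIn"),           # inside the window
]
# "ResetPassword" was last seen 200 days ago, beyond the 180-day baseline,
# so it is treated as performed for the first time.
print(first_time_in_window(history, now, "ResetPassword", 180))  # True
print(first_time_in_window(history, now, "SignIn", 180))         # False
```

The same shape covers the "uncommon" enrichments by comparing frequency within the window instead of mere presence.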
+
+> [!NOTE]
+> The **Enrichment name** field in all three tables displays two rows of information. The first, in **bold**, is the "friendly name" of the enrichment. The second *(in italics and parentheses)* is the field name of the enrichment as stored in the [**Behavior Analytics table**](identify-threats-with-entity-behavior-analytics.md#data-schema).
+
+## User insights table
+
+| Enrichment name | Description | Sample value |
+| --- | --- | --- |
+| **Account display name**<br>*(AccountDisplayName)* | The account display name of the user. | Admin, Hayden Cook |
+| **Account domain**<br>*(AccountDomain)* | The account domain name of the user. | |
+| **Account object ID**<br>*(AccountObjectID)* | The account object ID of the user. | a58df659-5cab-446c-9dd0-5a3af20ce1c2 |
+| **Blast radius**<br>*(BlastRadius)* | The blast radius is calculated based on several factors: the position of the user in the org tree, and the user's Azure Active Directory roles and permissions. | Low, Medium, High |
+| **Is dormant account**<br>*(IsDormantAccount)* | The account has not been used for the past 180 days. | True, False |
+| **Is local admin**<br>*(IsLocalAdmin)* | The account has local administrator privileges. | True, False |
+| **Is new account**<br>*(IsNewAccount)* | The account was created within the past 30 days. | True, False |
+| **On premises SID**<br>*(OnPremisesSID)* | The on-premises SID of the user related to the action. | S-1-5-21-1112946627-1321165628-2437342228-1103 |
+|
+
+## Device insights table
+
+| Enrichment name | Description | Sample value |
+| --- | --- | --- |
+| **Browser**<br>*(Browser)* | The browser used in the action. | Edge, Chrome |
+| **Device family**<br>*(DeviceFamily)* | The device family used in the action. | Windows |
+| **Device type**<br>*(DeviceType)* | The client device type used in the action. | Desktop |
+| **ISP**<br>*(ISP)* | The internet service provider used in the action. | |
+| **Operating system**<br>*(OperatingSystem)* | The operating system used in the action. | Windows 10 |
+| **Threat intel indicator description**<br>*(ThreatIntelIndicatorDescription)* | Description of the observed threat indicator resolved from the IP address used in the action. | Host is member of botnet: azorult |
+| **Threat intel indicator type**<br>*(ThreatIntelIndicatorType)* | The type of the threat indicator resolved from the IP address used in the action. | Botnet, C2, CryptoMining, Darknet, Ddos, MaliciousUrl, Malware, Phishing, Proxy, PUA, Watchlist |
+| **User agent**<br>*(UserAgent)* | The user agent used in the action. | Microsoft Azure Graph Client Library 1.0,<br>Swagger-Codegen/1.4.0.0/csharp,<br>EvoSTS |
+| **User agent family**<br>*(UserAgentFamily)* | The user agent family used in the action. | Chrome, Edge, Firefox |
+|
+
+## Activity insights tables
+
+#### Action performed
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user performed action**<br>*(FirstTimeUserPerformedAction)* | 180 | The action was performed for the first time by the user. | True, False |
+| **Action uncommonly performed by user**<br>*(ActionUncommonlyPerformedByUser)* | 10 | The action is not commonly performed by the user. | True, False |
+| **Action uncommonly performed among peers**<br>*(ActionUncommonlyPerformedAmongPeers)* | 180 | The action is not commonly performed among the user's peers. | True, False |
+| **First time action performed in tenant**<br>*(FirstTimeActionPerformedInTenant)* | 180 | The action was performed for the first time by anyone in the organization. | True, False |
+| **Action uncommonly performed in tenant**<br>*(ActionUncommonlyPerformedInTenant)* | 180 | The action is not commonly performed in the organization. | True, False |
+|
+
+#### App used
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user used app**<br>*(FirstTimeUserUsedApp)* | 180 | The app was used for the first time by the user. | True, False |
+| **App uncommonly used by user**<br>*(AppUncommonlyUsedByUser)* | 10 | The app is not commonly used by the user. | True, False |
+| **App uncommonly used among peers**<br>*(AppUncommonlyUsedAmongPeers)* | 180 | The app is not commonly used among user's peers. | True, False |
+| **First time app observed in tenant**<br>*(FirstTimeAppObservedInTenant)* | 180 | The app was observed for the first time in the organization. | True, False |
+| **App uncommonly used in tenant**<br>*(AppUncommonlyUsedInTenant)* | 180 | The app is not commonly used in the organization. | True, False |
+|
+
+#### Browser used
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user connected via browser**<br>*(FirstTimeUserConnectedViaBrowser)* | 30 | The browser was observed for the first time by the user. | True, False |
+| **Browser uncommonly used by user**<br>*(BrowserUncommonlyUsedByUser)* | 10 | The browser is not commonly used by the user. | True, False |
+| **Browser uncommonly used among peers**<br>*(BrowserUncommonlyUsedAmongPeers)* | 30 | The browser is not commonly used among user's peers. | True, False |
+| **First time browser observed in tenant**<br>*(FirstTimeBrowserObservedInTenant)* | 30 | The browser was observed for the first time in the organization. | True, False |
+| **Browser uncommonly used in tenant**<br>*(BrowserUncommonlyUsedInTenant)* | 30 | The browser is not commonly used in the organization. | True, False |
+|
+
+#### Country connected from
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user connected from country**<br>*(FirstTimeUserConnectedFromCountry)* | 90 | The geo location, as resolved from the IP address, was connected from for the first time by the user. | True, False |
+| **Country uncommonly connected from by user**<br>*(CountryUncommonlyConnectedFromByUser)* | 10 | The geo location, as resolved from the IP address, is not commonly connected from by the user. | True, False |
+| **Country uncommonly connected from among peers**<br>*(CountryUncommonlyConnectedFromAmongPeers)* | 90 | The geo location, as resolved from the IP address, is not commonly connected from among user's peers. | True, False |
+| **First time connection from country observed in tenant**<br>*(FirstTimeConnectionFromCountryObservedInTenant)* | 90 | The country was connected from for the first time by anyone in the organization. | True, False |
+| **Country uncommonly connected from in tenant**<br>*(CountryUncommonlyConnectedFromInTenant)* | 90 | The geo location, as resolved from the IP address, is not commonly connected from in the organization. | True, False |
+|
+
+#### Device used to connect
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user connected from device**<br>*(FirstTimeUserConnectedFromDevice)* | 30 | The source device was connected from for the first time by the user. | True, False |
+| **Device uncommonly used by user**<br>*(DeviceUncommonlyUsedByUser)* | 10 | The device is not commonly used by the user. | True, False |
+| **Device uncommonly used among peers**<br>*(DeviceUncommonlyUsedAmongPeers)* | 180 | The device is not commonly used among user's peers. | True, False |
+| **First time device observed in tenant**<br>*(FirstTimeDeviceObservedInTenant)* | 30 | The device was observed for the first time in the organization. | True, False |
+| **Device uncommonly used in tenant**<br>*(DeviceUncommonlyUsedInTenant)* | 180 | The device is not commonly used in the organization. | True, False |
+|
+
+#### Other device-related
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user logged on to device**<br>*(FirstTimeUserLoggedOnToDevice)* | 180 | The destination device was connected to for the first time by the user. | True, False |
+| **Device family uncommonly used in tenant**<br>*(DeviceFamilyUncommonlyUsedInTenant)* | 30 | The device family is not commonly used in the organization. | True, False |
+|
+
+#### Internet Service Provider used to connect
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user connected via ISP**<br>*(FirstTimeUserConnectedViaISP)* | 30 | The ISP was observed for the first time by the user. | True, False |
+| **ISP uncommonly used by user**<br>*(ISPUncommonlyUsedByUser)* | 10 | The ISP is not commonly used by the user. | True, False |
+| **ISP uncommonly used among peers**<br>*(ISPUncommonlyUsedAmongPeers)* | 30 | The ISP is not commonly used among user's peers. | True, False |
+| **First time connection via ISP in tenant**<br>*(FirstTimeConnectionViaISPInTenant)* | 30 | The ISP was observed for the first time in the organization. | True, False |
+| **ISP uncommonly used in tenant**<br>*(ISPUncommonlyUsedInTenant)* | 30 | The ISP is not commonly used in the organization. | True, False |
+|
+
+#### Resource accessed
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **First time user accessed resource**<br>*(FirstTimeUserAccessedResource)* | 180 | The resource was accessed for the first time by the user. | True, False |
+| **Resource uncommonly accessed by user**<br>*(ResourceUncommonlyAccessedByUser)* | 10 | The resource is not commonly accessed by the user. | True, False |
+| **Resource uncommonly accessed among peers**<br>*(ResourceUncommonlyAccessedAmongPeers)* | 180 | The resource is not commonly accessed among user's peers. | True, False |
+| **First time resource accessed in tenant**<br>*(FirstTimeResourceAccessedInTenant)* | 180 | The resource was accessed for the first time by anyone in the organization. | True, False |
+| **Resource uncommonly accessed in tenant**<br>*(ResourceUncommonlyAccessedInTenant)* | 180 | The resource is not commonly accessed in the organization. | True, False |
+|
+
+#### Miscellaneous
+
+| Enrichment name | [Baseline](#baseline-explained) (days) | Description | Sample value |
+| --- | --- | --- | --- |
+| **Last time user performed action**<br>*(LastTimeUserPerformedAction)* | 180 | Last time the user performed the same action. | <Timestamp> |
+| **Similar action wasn't performed in the past**<br>*(SimilarActionWasn'tPerformedInThePast)* | 30 | No action in the same resource provider was performed by the user. | True, False |
+| **Source IP location**<br>*(SourceIPLocation)* | *N/A* | The country resolved from the source IP of the action. | [Surrey, England] |
+| **Uncommon high volume of operations**<br>*(UncommonHighVolumeOfOperations)* | 7 | A user performed a burst of similar operations within the same provider. | True, False |
+| **Unusual number of Azure AD conditional access failures**<br>*(UnusualNumberOfAADConditionalAccessFailures)* | 5 | An unusual number of users failed to authenticate due to conditional access. | True, False |
+| **Unusual number of devices added**<br>*(UnusualNumberOfDevicesAdded)* | 5 | A user added an unusual number of devices. | True, False |
+| **Unusual number of devices deleted**<br>*(UnusualNumberOfDevicesDeleted)* | 5 | A user deleted an unusual number of devices. | True, False |
+| **Unusual number of users added to group**<br>*(UnusualNumberOfUsersAddedToGroup)* | 5 | A user added an unusual number of users to a group. | True, False |
+|
\ No newline at end of file
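The first-time and uncommon enrichments in these tables all follow one pattern: compare the current event against a sliding baseline window of the given length. A minimal sketch of that logic, assuming a simple (timestamp, user, action) event model — names here are illustrative, not the actual UEBA implementation:

```python
from datetime import datetime, timedelta

def first_time_user_performed_action(events, user, action, now, baseline_days=180):
    """Return True if `user` has not performed `action` within the baseline window.

    `events` is an iterable of (timestamp, user, action) tuples; the
    180-day default mirrors the FirstTimeUserPerformedAction baseline.
    """
    window_start = now - timedelta(days=baseline_days)
    for ts, ev_user, ev_action in events:
        if ev_user == user and ev_action == action and window_start <= ts < now:
            return False  # seen inside the baseline window -> not a first
    return True

events = [(datetime(2020, 12, 1), "alice", "ResetPassword")]
now = datetime(2021, 1, 5)
first_time_user_performed_action(events, "alice", "ResetPassword", now)  # False
first_time_user_performed_action(events, "bob", "ResetPassword", now)    # True
```

The shorter baselines (e.g. 10 days for the "uncommonly performed by user" enrichments) would use the same windowing with a frequency threshold instead of a simple membership test.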
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/cluster-security-certificate-management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/cluster-security-certificate-management.md
@@ -103,10 +103,13 @@ As a side note: IETF [RFC 3647](https://tools.ietf.org/html/rfc3647) formally de
We've seen earlier that Azure Key Vault supports automatic certificate rotation: the associated certificate policy defines the point in time, whether by days before expiration or percentage of total lifetime, when the certificate is rotated in the vault. The provisioning agent must be invoked after this point in time, and prior to the expiration of the now-previous certificate, to distribute this new certificate to all of the nodes of the cluster. Service Fabric will assist by raising health warnings when the expiration date of a certificate currently in use in the cluster occurs sooner than a predetermined interval. An automatic provisioning agent (i.e. the KeyVault VM extension), configured to observe the vault certificate, will periodically poll the vault, detect the rotation, and retrieve and install the new certificate. Provisioning done via VM/VMSS 'secrets' feature will require an authorized operator to update the VM/VMSS with the versioned KeyVault URI corresponding to the new certificate.
-In either case, the rotated certificate is now provisioned to all of the nodes, and we have described the mechanism Service Fabric employs to detect rotations; let us examine what happens next - assuming the rotation applied to the cluster certificate declared by subject common name (all applicable as of the time of this writing, and Service Fabric runtime version 7.1.409):
- - for new connections within, as well as into the cluster, the Service Fabric runtime will find and select the matching certificate with the farthest expiration date (the 'NotAfter' property of the certificate, often abbreviated as 'na')
+In either case, the rotated certificate is now provisioned to all of the nodes, and we have described the mechanism Service Fabric employs to detect rotations; let us examine what happens next - assuming the rotation applied to the cluster certificate declared by subject common name:
+ - for new connections within, as well as into the cluster, the Service Fabric runtime will find and select the most recently issued matching certificate (largest value of the 'NotBefore' property). Note this is a change from previous versions of the Service Fabric runtime.
- existing connections will be kept alive/allowed to naturally expire or otherwise terminate; an internal handler will have been notified that a new match exists
+> [!NOTE]
+> Prior to version 7.2.445 (7.2 CU4), Service Fabric selected the farthest expiring certificate (the certificate with the farthest 'NotAfter' property)
This translates into the following important observations:
- The renewal certificate may be ignored if its expiration date is sooner than that of the certificate currently in use.
- The availability of the cluster, or of the hosted applications, takes precedence over the directive to rotate the certificate; the cluster will converge on the new certificate eventually, but without timing guarantees.

It follows that:
@@ -128,8 +131,11 @@ We've described mechanisms, restrictions, outlined intricate rules and definitio
The sequence is fully scriptable/automated and allows a user-touch-free initial deployment of a cluster configured for certificate autorollover. Detailed steps are provided below. We'll use a mix of PowerShell cmdlets and fragments of json templates. The same functionality is achievable with all supported means of interacting with Azure.
-[!NOTE] This example assumes a certificate exists already in the vault; enrolling and renewing a KeyVault-managed certificate requires prerequisite manual steps as described earlier in this article. For production environments, use KeyVault-managed certificates - a sample script specific to a Microsoft-internal PKI is included below.
-Certificate autorollover only makes sense for CA-issued certificates; using self-signed certificates, including those generated when deploying a Service Fabric cluster in the Azure portal, is nonsensical, but still possible for local/developer-hosted deployments, by declaring the issuer thumbprint to be the same as of the leaf certificate.
+> [!NOTE]
+> This example assumes a certificate exists already in the vault; enrolling and renewing a KeyVault-managed certificate requires prerequisite manual steps as described earlier in this article. For production environments, use KeyVault-managed certificates - a sample script specific to a Microsoft-internal PKI is included below.
+
+> [!NOTE]
+> Certificate autorollover only makes sense for CA-issued certificates; using self-signed certificates, including those generated when deploying a Service Fabric cluster in the Azure portal, is nonsensical, but still possible for local/developer-hosted deployments, by declaring the issuer thumbprint to be the same as that of the leaf certificate.
### Starting point

For brevity, we will assume the following starting state:
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/cluster-security-certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/cluster-security-certificates.md
@@ -167,7 +167,10 @@ The node type certificates can also be declared by subject common name, as exemp
</NodeTypes>
```
-For either type of declaration, a Service Fabric node will read the configuration at startup, locate and load the specified certificates, and sort them in descending order of their NotAfter attribute; expired certificates are ignored, and the first element of the list is selected as the client credential for any Service Fabric connection attempted by this node. (In effect, Service Fabric favors the farthest expiring certificate.)
+For either type of declaration, a Service Fabric node will read the configuration at startup, locate and load the specified certificates, and sort them in descending order of their NotBefore attribute; expired certificates are ignored, and the first element of the list is selected as the client credential for any Service Fabric connection attempted by this node. (In effect, Service Fabric favors the most recently issued certificate.)
+
+> [!NOTE]
+> Prior to version 7.2.445 (7.2 CU4), Service Fabric selected the farthest expiring certificate (the certificate with the farthest 'NotAfter' property)
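The selection rule described above can be sketched as follows — an illustrative model only, with dictionaries standing in for X.509 certificate objects, not the Service Fabric runtime's actual code:

```python
from datetime import datetime

def select_credential(certs, now):
    """Pick the certificate a 7.2 CU4+ node would present as its credential:
    drop expired certificates, then take the most recently issued
    (largest NotBefore)."""
    valid = [c for c in certs if c["not_after"] > now]
    if not valid:
        return None
    return max(valid, key=lambda c: c["not_before"])

certs = [
    {"name": "old",     "not_before": datetime(2020, 1, 1),  "not_after": datetime(2022, 1, 1)},
    {"name": "renewed", "not_before": datetime(2020, 12, 1), "not_after": datetime(2021, 12, 1)},
]
# The renewed certificate wins despite its earlier NotAfter date; pre-7.2 CU4
# behavior would have been max(valid, key=lambda c: c["not_after"]) instead.
select_credential(certs, datetime(2021, 1, 5))["name"]  # 'renewed'
```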
Note that, for common-name based presentation declarations, a certificate is considered a match if its subject common name is equal to the X509FindValue (or X509FindValueSecondary) field of the declaration as a case-sensitive, exact string comparison. This is in contrast with the validation rules, which do support wildcard matching, as well as case-insensitive string comparisons.
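The contrast between the two matching modes can be sketched like this (the function names are mine for illustration, not Service Fabric APIs):

```python
import fnmatch

def presentation_match(declared_cn, cert_cn):
    # Presentation declarations: case-sensitive, exact string comparison.
    return declared_cn == cert_cn

def validation_match(declared_cn, cert_cn):
    # Validation rules: case-insensitive, wildcard-aware comparison.
    return fnmatch.fnmatchcase(cert_cn.lower(), declared_cn.lower())

presentation_match("mycluster.contoso.com", "MyCluster.contoso.com")  # False: case differs
validation_match("*.contoso.com", "MyCluster.contoso.com")            # True: wildcard + case-insensitive
```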
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-change-cert-thumbprint-to-cn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-change-cert-thumbprint-to-cn.md
@@ -58,8 +58,11 @@ There are multiple valid starting states for a conversion. The invariant is that
#### Valid starting states
- `Thumbprint: GoalCert, ThumbprintSecondary: None`
-- `Thumbprint: GoalCert, ThumbprintSecondary: OldCert1`, where `GoalCert` has a later `NotAfter` date than that of `OldCert1`
-- `Thumbprint: OldCert1, ThumbprintSecondary: GoalCert`, where `GoalCert` has a later `NotAfter` date than that of `OldCert1`
+- `Thumbprint: GoalCert, ThumbprintSecondary: OldCert1`, where `GoalCert` has a later `NotBefore` date than that of `OldCert1`
+- `Thumbprint: OldCert1, ThumbprintSecondary: GoalCert`, where `GoalCert` has a later `NotBefore` date than that of `OldCert1`
+
+> [!NOTE]
+> Prior to version 7.2.445 (7.2 CU4), Service Fabric selected the farthest expiring certificate (the certificate with the farthest 'NotAfter' property), so the above starting states prior to 7.2 CU4 require GoalCert to have a later `NotAfter` date than `OldCert1`
If your cluster isn't in one of the valid states previously described, see information on achieving that state in the section at the end of this article.
@@ -212,12 +215,15 @@ New-AzResourceGroupDeployment -ResourceGroupName $groupname -Verbose `
| Starting state | Upgrade 1 | Upgrade 2 |
| :--- | :--- | :--- |
-| `Thumbprint: OldCert1, ThumbprintSecondary: None` and `GoalCert` has a later `NotAfter` date than `OldCert1` | `Thumbprint: OldCert1, ThumbprintSecondary: GoalCert` | - |
-| `Thumbprint: OldCert1, ThumbprintSecondary: None` and `OldCert1` has a later `NotAfter` date than `GoalCert` | `Thumbprint: GoalCert, ThumbprintSecondary: OldCert1` | `Thumbprint: GoalCert, ThumbprintSecondary: None` |
-| `Thumbprint: OldCert1, ThumbprintSecondary: GoalCert`, where `OldCert1` has a later `NotAfter` date than `GoalCert` | Upgrade to `Thumbprint: GoalCert, ThumbprintSecondary: None` | - |
-| `Thumbprint: GoalCert, ThumbprintSecondary: OldCert1`, where `OldCert1` has a later `NotAfter` date than `GoalCert` | Upgrade to `Thumbprint: GoalCert, ThumbprintSecondary: None` | - |
+| `Thumbprint: OldCert1, ThumbprintSecondary: None` and `GoalCert` has a later `NotBefore` date than `OldCert1` | `Thumbprint: OldCert1, ThumbprintSecondary: GoalCert` | - |
+| `Thumbprint: OldCert1, ThumbprintSecondary: None` and `OldCert1` has a later `NotBefore` date than `GoalCert` | `Thumbprint: GoalCert, ThumbprintSecondary: OldCert1` | `Thumbprint: GoalCert, ThumbprintSecondary: None` |
+| `Thumbprint: OldCert1, ThumbprintSecondary: GoalCert`, where `OldCert1` has a later `NotBefore` date than `GoalCert` | Upgrade to `Thumbprint: GoalCert, ThumbprintSecondary: None` | - |
+| `Thumbprint: GoalCert, ThumbprintSecondary: OldCert1`, where `OldCert1` has a later `NotBefore` date than `GoalCert` | Upgrade to `Thumbprint: GoalCert, ThumbprintSecondary: None` | - |
| `Thumbprint: OldCert1, ThumbprintSecondary: OldCert2` | Remove one of `OldCert1` or `OldCert2` to get to state `Thumbprint: OldCertx, ThumbprintSecondary: None` | Continue from the new starting state |
+> [!NOTE]
+> For a cluster on a version prior to version 7.2.445 (7.2 CU4), replace `NotBefore` with `NotAfter` in the above states.
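The valid starting states above reduce to one rule under 7.2 CU4+ behavior: `GoalCert` must be among the declared certificates, and any other declared certificate must have an earlier `NotBefore`. A hedged sketch of that check (field names and structure are assumptions, not Service Fabric configuration schema):

```python
def is_valid_starting_state(thumbprint, secondary, goal, not_before):
    """True if the cluster can begin the thumbprint-to-CN conversion
    (7.2 CU4+ rules): GoalCert is declared, and every other declared
    certificate was issued earlier (smaller NotBefore)."""
    declared = [t for t in (thumbprint, secondary) if t is not None]
    if goal not in declared:
        return False
    others = [t for t in declared if t != goal]
    return all(not_before[goal] > not_before[t] for t in others)

# NotBefore ordering expressed as simple sortable values for illustration.
nb = {"GoalCert": 2, "OldCert1": 1}
is_valid_starting_state("GoalCert", None, "GoalCert", nb)        # True
is_valid_starting_state("OldCert1", "GoalCert", "GoalCert", nb)  # True
is_valid_starting_state("OldCert1", None, "GoalCert", nb)        # False: GoalCert not declared
```

For clusters prior to 7.2 CU4, the same check would compare `NotAfter` values instead, as the note above explains.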
For instructions on how to carry out any of these upgrades, see [Manage certificates in an Azure Service Fabric cluster](service-fabric-cluster-security-update-certs-azure.md).

## Next steps
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-concepts-partitioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-concepts-partitioning.md
@@ -1,13 +1,15 @@
---
title: Partitioning Service Fabric services
-description: Describes how to partition Service Fabric stateful services. Partitions enables data storage on the local machines so data and compute can be scaled together.
-
+description: Learn how to partition Service Fabric stateless and stateful services
ms.topic: conceptual
ms.date: 06/30/2017
ms.custom: devx-track-csharp
---
# Partition Service Fabric reliable services
-This article provides an introduction to the basic concepts of partitioning Azure Service Fabric reliable services. The source code used in the article is also available on [GitHub](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/tree/classic/Services/AlphabetPartitions).
+This article provides an introduction to the basic concepts of partitioning Azure Service Fabric reliable services. Partitioning enables data storage on the local machines so data and compute can be scaled together.
+
+> [!TIP]
+> A [complete sample](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/tree/classic/Services/AlphabetPartitions) of the code in this article is available on GitHub.
## Partitioning

Partitioning is not unique to Service Fabric. In fact, it is a core pattern of building scalable services. In a broader sense, we can think about partitioning as a concept of dividing state (data) and compute into smaller accessible units to improve scalability and performance. A well-known form of partitioning is [data partitioning][wikipartition], also known as sharding.
@@ -343,14 +345,14 @@ As we literally want to have one partition per letter, we can use 0 as the low k
![Browser screenshot](./media/service-fabric-concepts-partitioning/samplerunning.png)
-The entire source code of the sample is available on [GitHub](https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/tree/classic/Services/AlphabetPartitions).
+The complete solution of the code used in this article is available here: https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/tree/classic/Services/AlphabetPartitions.
## Next steps
-For information on Service Fabric concepts, see the following:
+Learn more about Service Fabric services:
+* [Connect and communicate with services in Service Fabric](service-fabric-connect-and-communicate-with-services.md)
* [Availability of Service Fabric services](service-fabric-availability-services.md)
* [Scalability of Service Fabric services](service-fabric-concepts-scalability.md)
-* [Capacity planning for Service Fabric applications](service-fabric-capacity-planning.md)
[wikipartition]: https://en.wikipedia.org/wiki/Partition_(database)
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-actors-reentrancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-reliable-actors-reentrancy.md
@@ -95,4 +95,4 @@ static class Program
## Next steps
-* Learn more about reentrancy in the [Actor API reference documentation](/previous-versions/azure/dn971626(v=azure.100))
+* Learn more about reentrancy in the [Actor API reference documentation](/dotnet/api/microsoft.servicefabric.actors?view=azure-dotnet)
service-fabric https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-communication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-reliable-services-communication.md
@@ -282,7 +282,7 @@ public class MyCommunicationClient implements CommunicationClient {
}
```
-The client factory is primarily responsible for creating communication clients. For clients that don't maintain a persistent connection, such as an HTTP client, the factory only needs to create and return the client. Other protocols that maintain a persistent connection, such as some binary protocols, should also be validated by the factory to determine whether the connection needs to be re-created.
+The client factory is primarily responsible for creating communication clients. For clients that don't maintain a persistent connection, such as an HTTP client, the factory only needs to create and return the client. Other protocols that maintain a persistent connection, such as some binary protocols, should also be validated (`ValidateClient(string endpoint, MyCommunicationClient client)`) by the factory to determine whether the connection needs to be re-created.
```csharp
public class MyCommunicationClientFactory : CommunicationClientFactoryBase<MyCommunicationClient>
site-recovery https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-troubleshoot-errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
@@ -186,6 +186,9 @@ To check if the VM uses a custom DNS setting:
Try to access the DNS server from the virtual machine. If the DNS server isn't accessible, make it accessible by either failing over the DNS server or creating the line of sight between the DR network and DNS.
+> [!NOTE]
+> If you use private endpoints, ensure that the VMs can resolve the private DNS records.
:::image type="content" source="./media/azure-to-azure-troubleshoot-errors/custom_dns.png" alt-text="com-error.":::

### Issue 2: Site Recovery configuration failed (151196)
static-web-apps https://docs.microsoft.com/en-us/azure/static-web-apps/front-end-frameworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/front-end-frameworks.md
@@ -19,7 +19,7 @@ The following table lists the settings for a series of frameworks and libraries<
The intent of the table columns is explained by the following items:

-- **App artifact location**: Lists the value for `app_artifact_location`, which is the [folder for built versions of application files](github-actions-workflow.md#build-and-deploy).
+- **Output location**: Lists the value for `output_location`, which is the [folder for built versions of application files](github-actions-workflow.md#build-and-deploy).
- **Custom build command**: When the framework requires a command different from `npm run build` or `npm run azure:build`, you can define a [custom build command](github-actions-workflow.md#custom-build-commands).
static-web-apps https://docs.microsoft.com/en-us/azure/static-web-apps/github-actions-workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/github-actions-workflow.md
@@ -58,7 +58,7 @@ jobs:
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
app_location: '/' # App source code path
api_location: 'api' # Api source code path - optional
- app_artifact_location: 'dist' # Built app content directory - optional
+ output_location: 'dist' # Built app content directory - optional
###### End of Repository/Build Configurations ######

close_pull_request_job:
@@ -127,7 +127,7 @@ with:
###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
app_location: '/' # App source code path
api_location: 'api' # Api source code path - optional
- app_artifact_location: 'dist' # Built app content directory - optional
+ output_location: 'dist' # Built app content directory - optional
###### End of Repository/Build Configurations ######
```
@@ -135,7 +135,7 @@ with:
|---|---|---|
| `app_location` | Location of your application code.<br><br>For example, enter `/` if your application source code is at the root of the repository, or `/app` if your application code is in a directory called `app`. | Yes |
| `api_location` | Location of your Azure Functions code.<br><br>For example, enter `/api` if your app code is in a folder called `api`. If no Azure Functions app is detected in the folder, the build doesn't fail; the workflow assumes you do not want an API. | No |
-| `app_artifact_location` | Location of the build output directory relative to the `app_location`.<br><br>For example, if your application source code is located at `/app`, and the build script outputs files to the `/app/build` folder, then set `build` as the `app_artifact_location` value. | No |
+| `output_location` | Location of the build output directory relative to the `app_location`.<br><br>For example, if your application source code is located at `/app`, and the build script outputs files to the `/app/build` folder, then set `build` as the `output_location` value. | No |
The `repo_token`, `action`, and `azure_static_web_apps_api_token` values are set for you by Azure Static Web Apps and shouldn't be manually changed.
@@ -158,7 +158,7 @@ You can customize the workflow to look for the [routes.json](routes.md) in any f
|---------------------|-------------|
| `routes_location` | Defines the directory location where the _routes.json_ file is found. This location is relative to the root of the repository. |
- Being explicit about the location of your _routes.json_ file is particularly important if your front-end framework build step does not move this file to the `app_artifact_location` by default.
+ Being explicit about the location of your _routes.json_ file is particularly important if your front-end framework build step does not move this file to the `output_location` by default.
## Environment variables
@@ -184,7 +184,7 @@ jobs:
###### Repository/Build Configurations
app_location: "/"
api_location: "api"
- app_artifact_location: "public"
+ output_location: "public"
###### End of Repository/Build Configurations ######
env: # Add environment variables here
  HUGO_VERSION: 0.58.0
static-web-apps https://docs.microsoft.com/en-us/azure/static-web-apps/publish-hugo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/publish-hugo.md
@@ -171,7 +171,7 @@ jobs:
# For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig
app_location: "/" # App source code path
api_location: "api" # Api source code path - optional
- app_artifact_location: "public" # Built app content directory - optional
+ output_location: "public" # Built app content directory - optional
###### End of Repository/Build Configurations ######
env:
  HUGO_VERSION: 0.58.0
storage https://docs.microsoft.com/en-us/azure/storage/blobs/network-file-system-protocol-support-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support-how-to.md
@@ -129,7 +129,7 @@ Create a directory on your Windows or Linux system, and then mount a container i
![Client for Network File System feature](media/network-file-system-protocol-how-to/client-for-network-files-system-feature.png)
-2. Mount a container by using the [mount](/windows-server/administration/windows-commands/mount) command.
+2. Open a **Command Prompt** window (cmd.exe). Then, mount a container by using the [mount](/windows-server/administration/windows-commands/mount) command.
```
mount -o nolock <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> *
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-explorer-troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-troubleshooting.md
@@ -56,22 +56,25 @@ If you don't have a role that grants any management layer permissions, Storage
If you want to access blob containers or queues, you can attach to those resources using your Azure credentials.

1. Open the Connect dialog.
-2. Select "Add a resource via Azure Active Directory (Azure AD). Click Next.
-3. Select the user account and tenant associated with the resource you're attaching to. Click Next.
-4. Select the resource type, enter the URL to the resource, and enter a unique display name for the connection. Click Next. Click Connect.
+2. Select "Add a resource via Azure Active Directory (Azure AD)". Select Next.
+3. Select the user account and tenant associated with the resource you're attaching to. Select Next.
+4. Select the resource type, enter the URL to the resource, and enter a unique display name for the connection. Select Next, and then select Connect.
For other resource types, we don't currently have an Azure RBAC-related solution. As a workaround, you can request a SAS URI to [attach to your resource](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=linux#use-a-shared-access-signature-uri).

### Recommended Azure built-in roles

There are several Azure built-in roles that can provide the permissions needed to use Storage Explorer. Some of those roles are:
-- [Owner](../../role-based-access-control/built-in-roles.md#owner): Manage everything, including access to resources. **Note**: this role will give you key access.
-- [Contributor](../../role-based-access-control/built-in-roles.md#contributor): Manage everything, excluding access to resources. **Note**: this role will give you key access.
-- [Reader](../../role-based-access-control/built-in-roles.md#reader): Read and list resources.
-- [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor): Full management of storage accounts. **Note**: this role will give you key access.
-- [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner): Full access to Azure Storage blob containers and data.
-- [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor): Read, write, and delete Azure Storage containers and blobs.
-- [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader): Read and list Azure Storage containers and blobs.
+- [Owner](/azure/role-based-access-control/built-in-roles#owner): Manage everything, including access to resources.
+- [Contributor](/azure/role-based-access-control/built-in-roles#contributor): Manage everything, excluding access to resources.
+- [Reader](/azure/role-based-access-control/built-in-roles#reader): Read and list resources.
+- [Storage Account Contributor](/azure/role-based-access-control/built-in-roles#storage-account-contributor): Full management of storage accounts.
+- [Storage Blob Data Owner](/azure/role-based-access-control/built-in-roles#storage-blob-data-owner): Full access to Azure Storage blob containers and data.
+- [Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor): Read, write, and delete Azure Storage containers and blobs.
+- [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader): Read and list Azure Storage containers and blobs.
+
+> [!NOTE]
+> The Owner, Contributor, and Storage Account Contributor roles grant account key access.
## Error: Self-signed certificate in certificate chain (and similar errors)
@@ -182,46 +185,62 @@ If you can't remove an attached account or storage resource through the UI, you
## Proxy issues
-First, make sure that the following information you entered is correct:
+Storage Explorer supports connecting to Azure Storage resources via a proxy server. If you experience any issues connecting to Azure via proxy, here are some suggestions.
-* The proxy URL and port number
-* Username and password if the proxy requires them
+> [!NOTE]
+> Storage Explorer only supports basic authentication with proxy servers. Other authentication methods, such as NTLM, are not supported.
> [!NOTE]
> Storage Explorer doesn't support proxy auto-config files for configuring proxy settings.
-### Common solutions
+### Verify Storage Explorer proxy settings
+
+The **Application → Proxy → Proxy configuration** setting determines which source Storage Explorer gets the proxy configuration from.
+
+If you select "Use environment variables", make sure to set the `HTTPS_PROXY` or `HTTP_PROXY` environment variables (environment variables are case-sensitive, so be sure to set the correct variables). If these variables are undefined or invalid, Storage Explorer won't use a proxy. Restart Storage Explorer after modifying any environment variables.
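The environment-variable option above can be sketched as follows. The proxy host and port are placeholder values (assumptions), not ones taken from this article:

```shell
# Placeholder proxy address -- replace with your proxy server's host and port.
# Variable names are case-sensitive, so export them exactly as shown.
export HTTPS_PROXY="http://proxy.contoso.example:8080"
export HTTP_PROXY="http://proxy.contoso.example:8080"

# Confirm the variables are set, then restart Storage Explorer so it
# picks up the new values.
env | grep -E '^HTTPS?_PROXY='
```

On Windows, the equivalent would be `set` in a Command Prompt session or `$env:` in PowerShell.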
-If you're still experiencing issues, try the following troubleshooting methods:
+If you select "Use app proxy settings", make sure the in-app proxy settings are correct.
-* If you can connect to the internet without using your proxy, verify that Storage Explorer works without proxy settings enabled. If this is the case, there may be an issue with your proxy settings. Work with your administrator to identify the problems.
-* Verify that other applications that use the proxy server work as expected.
-* Verify that you can connect to the portal for the Azure environment you're trying to use.
-* Verify that you can receive responses from your service endpoints. Enter one of your endpoint URLs into your browser. If you can connect, you should receive InvalidQueryParameterValue or a similar XML response.
-* If someone else is also using Storage Explorer with your proxy server, verify that they can connect. If they can, you may have to contact your proxy server admin.
+### Steps for diagnosing issues
+
+If you're still experiencing issues, try these troubleshooting methods:
+
+1. If you can connect to the internet without using your proxy, verify that Storage Explorer works without proxy settings enabled. If Storage Explorer connects successfully, there may be an issue with your proxy server. Work with your administrator to identify the problems.
+2. Verify that other applications that use the proxy server work as expected.
+3. Verify that you can connect to the portal for the Azure environment you're trying to use.
+4. Verify that you can receive responses from your service endpoints. Enter one of your endpoint URLs into your browser. If you can connect, you should receive an `InvalidQueryParameterValue` or similar XML response.
+5. Check whether someone else using Storage Explorer with the same proxy server can connect. If they can, you may have to contact your proxy server admin.
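Step 4 above can also be run from a terminal instead of a browser. The endpoint URL below is a placeholder (an assumption, not a value from this article), and the `|| echo` fallback keeps the check from aborting when the endpoint is unreachable:

```shell
# Placeholder endpoint -- substitute one of your own service endpoint URLs.
ENDPOINT="https://contoso.blob.core.windows.net/"

# -sS hides the progress bar but still reports errors; the short timeout
# keeps the check quick when the proxy or network is misconfigured.
curl -sS --connect-timeout 10 "$ENDPOINT" \
  || echo "No response: check your proxy and network settings."
```

A reachable endpoint typically returns an XML error body such as `InvalidQueryParameterValue`, which still confirms connectivity even though you can't access the resource.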
### Tools for diagnosing issues
-If you have networking tools, such as Fiddler for Windows, you can diagnose the problems as follows:
+A networking tool, such as Fiddler, can help you diagnose problems.
-* If you have to work through your proxy, you may have to configure your networking tool to connect through the proxy.
-* Check the port number used by your networking tool.
-* Enter the local host URL and the networking tool's port number as proxy settings in Storage Explorer. When you do this correctly, your networking tool starts logging network requests made by Storage Explorer to management and service endpoints. For example, enter `https://cawablobgrs.blob.core.windows.net/` for your blob endpoint in a browser, and you'll receive a response that resembles the following:
+1. Configure your networking tool as a proxy server running on the local host. If you have to continue working behind an actual proxy, you may have to configure your networking tool to connect through the proxy.
+2. Check the port number used by your networking tool.
+3. Configure Storage Explorer proxy settings to use the local host and the networking tool's port number (such as "localhost:8888").
+
+When set correctly, your networking tool will log network requests made by Storage Explorer to management and service endpoints.
+
+If your networking tool doesn't appear to be logging Storage Explorer traffic, try testing your tool with a different application. For example, enter the endpoint URL for one of your storage resources (such as `https://contoso.blob.core.windows.net/`) in a web browser, and you'll receive a response similar to:
![Code sample](./media/storage-explorer-troubleshooting/4022502_en_2.png)
- This response suggests the resource exists, even though you can't access it.
+ The response suggests the resource exists, even though you can't access it.
+
+If your networking tool only shows traffic from other applications, you may need to adjust the proxy settings in Storage Explorer. Otherwise, you may need to adjust your tool's settings.
### Contact proxy server admin
-If your proxy settings are correct, you may have to contact your proxy server admin to:
+If your proxy settings are correct, you may have to contact your proxy server administrator to:
* Make sure your proxy doesn't block traffic to Azure management or resource endpoints.
-* Verify the authentication protocol used by your proxy server. Storage Explorer doesn't currently support NTLM proxies.
+* Verify the authentication protocol used by your proxy server. Storage Explorer supports only basic authentication; NTLM proxies aren't supported.
## "Unable to Retrieve Children" error message
-If you're connected to Azure through a proxy, verify that your proxy settings are correct. If you're granted access to a resource from the owner of the subscription or account, verify that you have read or list permissions for that resource.
+If you're connected to Azure through a proxy, verify that your proxy settings are correct.
+
+If the owner of a subscription or account has granted you access to a resource, verify that you have read or list permissions for that resource.
## Connection string doesn't have complete configuration settings
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-configure.md
@@ -23,13 +23,13 @@ AzCopy is a command-line utility that you can use to copy blobs or files to or f
## Configure proxy settings
-To configure the proxy settings for AzCopy, set the `https_proxy` environment variable. If you run AzCopy on Windows, AzCopy automatically detects proxy settings, so you don't have to use this setting in Windows. If you choose to use this setting in Windows, it will override automatic detection.
+To configure the proxy settings for AzCopy, set the `HTTPS_PROXY` environment variable. If you run AzCopy on Windows, AzCopy automatically detects proxy settings, so you don't have to use this setting in Windows. If you choose to use this setting in Windows, it will override automatic detection.
| Operating system | Command |
|--------|-----------|
-| **Windows** | In a command prompt use: `set https_proxy=<proxy IP>:<proxy port>`<br> In PowerShell use: `$env:https_proxy="<proxy IP>:<proxy port>"`|
-| **Linux** | `export https_proxy=<proxy IP>:<proxy port>` |
-| **macOS** | `export https_proxy=<proxy IP>:<proxy port>` |
+| **Windows** | In a command prompt use: `set HTTPS_PROXY=<proxy IP>:<proxy port>`<br> In PowerShell use: `$env:HTTPS_PROXY="<proxy IP>:<proxy port>"`|
+| **Linux** | `export HTTPS_PROXY=<proxy IP>:<proxy port>` |
+| **macOS** | `export HTTPS_PROXY=<proxy IP>:<proxy port>` |
Currently, AzCopy doesn't support proxies that require authentication with NTLM or Kerberos.
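The table above boils down to one export on Linux or macOS. The proxy IP and port below are placeholders (assumptions), and the commented copy command only illustrates that AzCopy inherits the variable from the shell it runs in:

```shell
# Placeholder proxy address -- replace with your proxy's IP and port.
export HTTPS_PROXY="10.0.0.4:3128"

# Any AzCopy command run from this shell now routes traffic through the
# proxy, for example:
#   azcopy copy ./localfile "https://<account>.blob.core.windows.net/<container>?<SAS>"
echo "AzCopy will use proxy: $HTTPS_PROXY"
```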
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-files-compare-protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-compare-protocols.md
@@ -19,7 +19,7 @@ Azure Files offers two protocols for connecting and mounting your Azure file sha
|Feature |NFS (preview) |SMB |
|---------|---------|---------|
|Access protocols |NFS 4.1 |SMB 2.1, SMB 3.0 |
-|Supported OS |Linux kernel version 4.3+ |Windows 2008 R2+, Linux kernel version 4.11+ |
+|Recommended OS |Linux kernel version 4.3+ |Windows 2008 R2+, Linux kernel version 4.11+ |
|[Available tiers](storage-files-planning.md#storage-tiers) |Premium storage |Premium storage, transaction optimized, hot, cool |
|[Redundancy](storage-files-planning.md#redundancy) |LRS, ZRS |LRS, ZRS, GRS |
|Authentication |Host-based authentication only |Identity-based authentication, user-based authentication |
storage https://docs.microsoft.com/en-us/azure/storage/files/storage-troubleshooting-files-nfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-troubleshooting-files-nfs.md
@@ -14,6 +14,22 @@ ms.custom: references_regions
This article lists some common problems related to Azure NFS file shares. It provides potential causes and workarounds when these problems are encountered.
+## chgrp "filename" failed: Invalid argument (22)
+
+### Cause 1: idmapping is not disabled
+Azure Files doesn't allow alphanumeric UID/GID values, so idmapping must be disabled.
+
+### Cause 2: idmapping was disabled, but got re-enabled after encountering a bad file or directory name
+Even if idmapping has been correctly disabled, the settings for disabling idmapping get overridden in some cases. For example, when Azure Files encounters a bad file name, it sends back an error. Upon seeing this particular error code, the NFS v4.1 Linux client re-enables idmapping, and future requests are sent with alphanumeric UID/GID values again. For a list of unsupported characters on Azure Files, see this [article](https://docs.microsoft.com/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#:~:text=The%20Azure%20File%20service%20naming%20rules%20for%20directory,be%20no%20more%20than%20255%20characters%20in%20length). The colon is one of the unsupported characters.
+
+### Workaround
+Check that idmapping is disabled and that nothing is re-enabling it, then perform the following steps:
+
+- Unmount the share.
+- Disable idmapping with `echo Y > /sys/module/nfs/parameters/nfs4_disable_idmapping`.
+- Mount the share back.
+- If running rsync, run it with the `--numeric-ids` argument from a directory that doesn't have any bad directory or file names.
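The workaround steps above can be collected into one reviewable sketch. The mount point and export path are placeholders (assumptions), and the mount options (`vers=4,minorversion=1,sec=sys`) are the NFS 4.1 options Azure Files shares generally use, so verify them against your own mount configuration. The script prints each command rather than running it, since unmounting, writing to sysfs, and remounting all require root privileges on a live client:

```shell
# Placeholder values -- adjust to your storage account, share, and mount point.
MOUNT_POINT="/mnt/share"
EXPORT_PATH="contoso.file.core.windows.net:/contoso/share"

# Review these commands, then run them (as root) in order.
echo "umount ${MOUNT_POINT}"
echo "echo Y > /sys/module/nfs/parameters/nfs4_disable_idmapping"
echo "mount -t nfs -o vers=4,minorversion=1,sec=sys ${EXPORT_PATH} ${MOUNT_POINT}"
# rsync needs --numeric-ids, started from a directory whose names contain
# no characters unsupported by Azure Files (such as ':').
echo "rsync -av --numeric-ids /data/ ${MOUNT_POINT}/"
```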
+
## Unable to create an NFS share

### Cause 1: Subscription is not enabled
@@ -47,7 +63,7 @@ NFS is only available on storage accounts with the following configuration:
- Tier - Premium
- Account Kind - FileStorage
- Redundancy - LRS
-- Regions - East US, East US 2, UK South, SouthEast Asia
+- Regions - [List of supported regions](https://docs.microsoft.com/azure/storage/files/storage-files-how-to-create-nfs-shares?tabs=azure-portal#regional-availability)
#### Solution
@@ -131,4 +147,4 @@ The NFS protocol communicates to its server over port 2049, make sure that this
Verify that port 2049 is open on your client by running the following command: `telnet <storageaccountnamehere>.file.core.windows.net 2049`. If the port is not open, open it.

## Need help? Contact support.
-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
\ No newline at end of file
+If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly.
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/connect-job-to-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/connect-job-to-vnet.md
@@ -6,7 +6,7 @@ ms.author: sidram
ms.reviewer: mamccrea
ms.service: stream-analytics
ms.topic: conceptual
-ms.date: 12/23/2020
+ms.date: 01/04/2021
ms.custom: devx-track-js --- # Connect Stream Analytics jobs to resources in an Azure Virtual Network (VNet)
@@ -35,7 +35,7 @@ Your jobs can connect to the following Azure services using this technique:
1. [Blob Storage or Azure Data Lake Storage Gen2](https://docs.microsoft.com/azure/stream-analytics/blob-output-managed-identity) - can be your job's storage account, streaming input or output.
2. [Azure Event Hubs](https://docs.microsoft.com/azure/stream-analytics/event-hubs-managed-identity) - can be your job's streaming input or output.
-If your jobs need to connect to other input or output types, then the only option is to use private endpoints in Stream Analytics clusters.
+If your jobs need to connect to other input or output types, you could write from Stream Analytics to Event Hubs output first and then to any destination of your choice using Azure Functions. If you want to directly write from Stream Analytics to other output types secured in a VNet or firewall, then the only option is to use private endpoints in Stream Analytics clusters.
## Next steps
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/event-hubs-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/event-hubs-managed-identity.md
@@ -5,7 +5,7 @@ author: mamccrea
ms.author: mamccrea
ms.service: stream-analytics
ms.topic: how-to
-ms.date: 11/30/2020
+ms.date: 01/04/2021
---

# Use managed identities to access Event Hub from an Azure Stream Analytics job (Preview)
@@ -16,6 +16,9 @@ A managed identity is a managed application registered in Azure Active Directory
This article shows you how to enable Managed Identity for an Event Hubs input or output of a Stream Analytics job through the Azure portal. Before you enable Managed Identity, you must first have a Stream Analytics job and an Event Hub resource.
+### Limitation
+During preview, sampling input from Event Hubs on the Azure portal won't work when you use the Managed Identity authentication mode.
+
## Create a managed identity

First, you create a managed identity for your Azure Stream Analytics job.
@@ -79,4 +82,4 @@ Now that your managed identity is configured, you're ready to add the Event Hu
## Next steps

* [Event Hubs output from Azure Stream Analytics](event-hubs-output.md)
-* [Stream data from Event Hubs](stream-analytics-define-inputs.md#stream-data-from-event-hubs)
\ No newline at end of file
+* [Stream data from Event Hubs](stream-analytics-define-inputs.md#stream-data-from-event-hubs)
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/how-to-discover-connect-analyze-azure-purview.md new file mode 100644
@@ -0,0 +1,70 @@
+---
+title: Discover, connect, and explore data in Synapse using Azure Purview
+description: A guide on how to discover, connect to, and explore data in Synapse
+services: synapse-analytics
+author: ArnoMicrosoft
+ms.service: synapse-analytics
+ms.topic: how-to
+ms.date: 12/16/2020
+ms.author: acomet
+ms.reviewer: jrasnick
+---
+
+# Discover, connect, and explore data in Synapse using Azure Purview
+
+> [!IMPORTANT]
+> The integration between Azure Synapse Analytics and Azure Purview is currently in preview. If you're interested in trying Azure Purview in Synapse, connect with your Microsoft sales representative.
+
+In this document, you'll learn about the types of interactions that you can perform after registering an Azure Purview account in Synapse.
+
+## Prerequisites
+
+- [Azure Purview account](../../purview/create-catalog-portal.md)
+- [Synapse workspace](../quickstart-create-workspace.md)
+- [Connect an Azure Purview Account into Synapse](quickstart-connect-azure-purview.md)
+
+## Using Azure Purview in Synapse
+
+Using Azure Purview in Synapse requires you to have access to that Purview account. Synapse passes through your Purview permissions. For example, if you have a curator role, you'll be able to edit metadata scanned by Azure Purview.
+
+### Data discovery: search datasets
+
+To discover data registered and scanned by Azure Purview, use the Search bar at the top center of the Synapse workspace. Make sure that you select Azure Purview to search across all of your organization's data.
+
+## Azure Purview actions
+
+Here is a list of the Azure Purview features that are available in Synapse:
+- **Overview** of the metadata
+- View and edit **schema** of the metadata with classifications, glossary terms, data types, and descriptions
+- View **lineage** to understand dependencies and do impact analysis. For more information, see [lineage](../../purview/catalog-lineage-user-guide.md)
+- View and edit **Contacts** to know who is an owner or expert over a dataset
+- **Related** to understand the hierarchical dependencies of a specific dataset. This experience is helpful for browsing through the data hierarchy.
+
+## Actions that you can perform over datasets with Synapse resources
+
+### Connect data to Synapse
+
+- You can create a **new linked service** to Synapse. That action is required to copy data to Synapse or to have the data in your data hub (for supported data sources like ADLS Gen2).
+- For objects like files, folders, or tables, you can directly create a **new integration dataset** and use an existing linked service if one is already created.
+
+We are not yet able to infer whether there is an existing linked service or integration dataset.
+
+### Develop in Synapse
+
+There are three actions that you can perform: **New SQL Script**, **New Notebook**, and **New Data Flow**.
+
+With **New SQL Script**, depending on the type of support, you can:
+- View the top 100 rows in order to understand the shape of the data.
+- Create an external table from Synapse SQL database
+- Load the data into a Synapse SQL database
+
+With **New notebook**, you can:
+- Load data into a Spark DataFrame
+- Create a Spark Table (if you do that over Parquet format, it also creates a serverless SQL pool table).
+
+With **New data flow**, you can create an integration dataset that can be used as a source in a data flow pipeline. Data flow is a no-code developer capability for performing data transformations. For more information, see [using data flow in Synapse](../quickstart-data-flow.md).
+
+## Next steps
+
+- [Register and scan Azure Synapse assets in Azure Purview](../../purview/register-scan-azure-synapse-analytics.md)
+- [How to Search Data in Azure Purview Data Catalog](../../purview/how-to-search-catalog.md)
\ No newline at end of file
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md new file mode 100644
@@ -0,0 +1,49 @@
+---
+title: Connect an Azure Purview Account
+description: Connect an Azure Purview Account to a Synapse workspace.
+services: synapse-analytics
+author: ArnoMicrosoft
+ms.service: synapse-analytics
+ms.topic: quickstart
+ms.date: 12/16/2020
+ms.author: acomet
+ms.reviewer: jrasnick
+---
+
+# QuickStart: Connect an Azure Purview Account to a Synapse workspace
+
+> [!IMPORTANT]
+> The integration between Azure Synapse Analytics and Azure Purview is currently in preview. If you're interested in trying Azure Purview in Synapse, connect with your Microsoft sales representative.
+
+In this quickstart, you will register an Azure Purview Account to a Synapse workspace. That connection allows you to discover Azure Purview assets and interact with them through Synapse capabilities.
+
+You can perform the following tasks in Synapse:
+- Use the search box at the top to find Purview assets based on keywords
+- Understand the data based on metadata, lineage, annotations
+- Connect those data to your workspace with linked services or integration datasets
+- Analyze those datasets with Synapse Apache Spark, Synapse SQL, and Data Flow
+
+## Prerequisites
+- [Azure Purview account](../../purview/create-catalog-portal.md)
+- [Synapse workspace](../quickstart-create-workspace.md)
+
+## Sign in to a Synapse workspace
+
+Go to https://web.azuresynapse.net and sign in to your workspace.
+
+## Permissions for connecting an Azure Purview Account
+
+- To connect an Azure Purview Account to a Synapse workspace, you need the **Contributor** role on the Synapse workspace (assigned through Azure portal IAM), and you need access to that Azure Purview Account.
+
+## Connect an Azure Purview Account
+
+- In the Synapse workspace, go to **Manage** -> **Azure Purview**. Select **Connect to a Purview account**.
+- You can choose **From Azure subscription** or **Enter manually**. With **From Azure subscription**, you can select an account that you have access to.
+- Once connected, you should be able to see the name of the Purview account in the tab **Azure Purview account**.
+- You can use the Search bar at the top center of the Synapse workspace to search for data.
+
+## Next steps
+
+[Register and scan Azure Synapse assets in Azure Purview](../../purview/register-scan-azure-synapse-analytics.md)
+
+[Discover, connect and explore data in Synapse using Azure Purview](how-to-discover-connect-analyze-azure-purview.md)
\ No newline at end of file
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/machine-learning/tutorial-automl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-automl.md
@@ -1,6 +1,6 @@
---
-title: 'Tutorial: Train a model using automated ML'
-description: Tutorial on how to train a machine learning model without code in Azure Synapse using Apache Spark and automated ML.
+title: 'Tutorial: Train a model by using automated machine learning'
+description: Tutorial on how to train a machine learning model without code in Azure Synapse Analytics.
services: synapse-analytics
ms.service: synapse-analytics
ms.subservice: machine-learning
@@ -12,116 +12,106 @@ author: nelgson
ms.author: negust ---
-# Tutorial: Train a machine learning model code-free in Azure Synapse with Apache Spark and automated ML
+# Tutorial: Train a machine learning model without code
-Learn how to easily enrich your data in Spark tables with new machine learning models that you train using [automated ML in Azure Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml). A user in Synapse can simply select a Spark table in the Azure Synapse workspace to use as a training dataset for building machine learning models in a code-free experience.
+You can enrich your data in Spark tables with new machine learning models that you train by using [automated machine learning](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml). In Azure Synapse Analytics, you can select a Spark table in the workspace to use as a training dataset for building machine learning models, and you can do this in a code-free experience.
-In this tutorial, you'll learn how to:
-
-> [!div class="checklist"]
-> - Train machine learning models using a code-free experience in Azure Synapse studio that uses automated ML in Azure Machine Learning. The type of model you train depends on the problem you are trying to solve.
+In this tutorial, you learn how to train machine learning models by using a code-free experience in Azure Synapse Analytics studio. You use automated machine learning in Azure Machine Learning, instead of coding the experience manually. The type of model you train depends on the problem you are trying to solve.
If you don't have an Azure subscription, [create a free account before you begin](https://azure.microsoft.com/free/).

## Prerequisites

-- [Synapse Analytics workspace](../get-started-create-workspace.md) with an ADLS Gen2 storage account configured as the default storage. You need to be the **Storage Blob Data Contributor** of the ADLS Gen2 filesystem that you work with.
-- Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a Spark pool in Azure Synapse](../quickstart-create-sql-pool-studio.md).
-- Azure Machine Learning linked service in your Azure Synapse Analytics workspace. For details, see [Create an Azure Machine Learning linked service in Azure Synapse](quickstart-integrate-azure-machine-learning.md).
+- An [Azure Synapse Analytics workspace](../get-started-create-workspace.md). Ensure that it has the following storage account, configured as the default storage: Azure Data Lake Storage Gen2. For the Data Lake Storage Gen2 file system that you work with, ensure that you're the **Storage Blob Data Contributor**.
+- An Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a dedicated SQL pool by using Azure Synapse Analytics studio](../quickstart-create-sql-pool-studio.md).
+- An Azure Machine Learning linked service in your Azure Synapse Analytics workspace. For details, see [Quickstart: Create a new Azure Machine Learning linked service in Azure Synapse Analytics](quickstart-integrate-azure-machine-learning.md).
## Sign in to the Azure portal
-Sign in to the [Azure portal](https://portal.azure.com/)
+Sign in to the [Azure portal](https://portal.azure.com/).
## Create a Spark table for training dataset
-You will need a Spark table for this tutorial. The following notebook will create a Spark table.
+For this tutorial, you need a Spark table. The following notebook creates one.
-1. Download the notebook [Create-Spark-Table-NYCTaxi- Data.ipynb](https://go.microsoft.com/fwlink/?linkid=2149229)
+1. Download the notebook [Create-Spark-Table-NYCTaxi- Data.ipynb](https://go.microsoft.com/fwlink/?linkid=2149229).
-1. Import the notebook to Azure Synapse Studio.
-![Import Notebook](media/tutorial-automl-wizard/tutorial-automl-wizard-00a.png)
+1. Import the notebook to Azure Synapse Analytics studio.
+![Screenshot of Azure Synapse Analytics, with Import option highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00a.png)
-1. Select the Spark pool you want to use and click `Run all`. Run this notebook will get New York taxi data from open dataset and save to your default Spark database.
-![Run all](media/tutorial-automl-wizard/tutorial-automl-wizard-00b.png)
+1. Select the Spark pool you want to use, and select **Run all**. This gets New York taxi data from the open dataset, and saves it to your default Spark database.
+![Screenshot of Azure Synapse Analytics, with Run all and Spark database highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00b.png)
-1. After the notebook run has completed, a new Spark table will be created under the default Spark database. Go to the Data Hub and find the table named with `nyc_taxi`.
-![Spark Table](media/tutorial-automl-wizard/tutorial-automl-wizard-00c.png)
+1. After the notebook run has completed, you see a new Spark table under the default Spark database. From **Data**, find the table named **nyc_taxi**.
+![Screenshot of Azure Synapse Analytics Data tab, with new table highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00c.png)
-## Launch automated ML wizard to train a model
+## Launch automated machine learning wizard
-Right-click on the Spark table created in the previous step. Select "Machine Learning-> Enrich with new model" to open the wizard.
-![Launch automated ML wizard](media/tutorial-automl-wizard/tutorial-automl-wizard-00d.png)
+Here's how:
-A configuration panel will appear and you will be asked to provide configuration details for creating an automated ML experiment run in Azure Machine Learning. This run will train multiple models and the best model from a successful run will be registered in the Azure Machine Learning model registry:
+1. Right-click the Spark table that you created in the previous step. To open the wizard, select **Machine Learning** > **Enrich with new model**.
+![Screenshot of the Spark table, with Machine Learning and Enrich with new model highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-00d.png)
-![Configure run step1](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00a.png)
+1. You can then provide your configuration details for creating an automated machine learning experiment run in Azure Machine Learning. This run trains multiple models, and the best model from a successful run is registered in the Azure Machine Learning model registry.
-- **Azure Machine Learning workspace**: An Azure Machine Learning workspace is required for creation of the automated ML experiment run. You also need to link your Azure Synapse workspace with the Azure Machine Learning workspace using a [linked service](quickstart-integrate-azure-machine-learning.md). Once you have all the pre=requisites, you can specify the Azure Machine Learning workspace you want to use for this automated ML run.
+ ![Screenshot of Enrich with new model configuration specifications.](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00a.png)
-- **Experiment name**: Specify the experiment name. When you submit an automated ML run, you provide an experiment name. Information for the run is stored under that experiment in the Azure Machine Learning workspace. This experience will create a new experiment by default and is generating a proposed name, but you can also provide a name of an existing experiment.
+ - **Azure Machine Learning workspace**: An Azure Machine Learning workspace is required for creating an automated machine learning experiment run. You also need to link your Azure Synapse Analytics workspace with the Azure Machine Learning workspace by using a [linked service](quickstart-integrate-azure-machine-learning.md). After you have fulfilled all the prerequisites, you can specify the Azure Machine Learning workspace you want to use for this automated run.
-- **Best model**: Specify the name of the best model from the automated ML run. The best model will be given this name and saved in the Azure Machine Learning model registry automatically after this run. An automated ML run will create many machine learning models. Based on the primary metric that you will select in a later step, those models can be compared and the best model can be selected.
+ - **Experiment name**: Specify the experiment name. When you submit an automated machine learning run, you provide an experiment name. Information for the run is stored under that experiment in the Azure Machine Learning workspace. This experience creates a new experiment by default and generates a proposed name, but you can also provide a name of an existing experiment.
-- **Target column**: This is what the model is trained to predict. Choose the column that you want to predict.
+ - **Best model**: Specify the name of the best model from the automated run. The best model is given this name and saved in the Azure Machine Learning model registry automatically after this run. An automated machine learning run creates many machine learning models. Based on the primary metric that you select in a later step, those models can be compared and the best model can be selected.
-- **Spark pool**: The Spark pool you want to use for the automated ML experiment run. The computations will be executed on the pool you specify.
+ - **Target column**: This is what the model is trained to predict. Choose the column that you want to predict. (In this tutorial, we select the numeric column `fareAmount` as the target column.)
-- **Spark configuration details**: In addition to the Spark pool, you also have the option to provide session configuration details.
+ - **Spark pool**: The Spark pool you want to use for the automated experiment run. The computations are run on the pool you specify.
-In this tutorial, we select the numeric column `fareAmount` as the target column.
+ - **Spark configuration details**: In addition to the Spark pool, you also have the option to provide session configuration details.
-Click "Continue".
+1. Select **Continue**.
## Choose task type
-Select the machine learning model type for the experiment based on the question you are trying to answer. Since we selected `fareAmount` as the target column, and it is a numeric value, we will select *Regression*.
-
-Click "Continue" to config additional settings.
+Select the machine learning model type for the experiment, based on the question you're trying to answer. Because `fareAmount` is the target column, and it's a numeric value, select **Regression** here. Then select **Continue**.
-![Task type selection](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00b.png)
+![Screenshot of Enrich with new model, with Regression highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00b.png)
## Additional configurations
-If you select *Classification* or *Regression* type, the additional configurations are:
+If you select **Regression** or **Classification** as your model type in the previous section, the following configurations are available:
-- **Primary metric**: The metric used to measure how well the model is doing. This is the metric that will be used to compare different models created in the automated ML run, and determine which model performed best.
+- **Primary metric**: The metric used to measure how well the model is doing. This is the metric used to compare different models created in the automated run, and determine which model performed best.
-- **Training job time (hours)**: The maximum amount of time, in hours, for an experiment to run and train models. Note that you can also provide values less than 1. For example `0.5`.
+- **Training job time (hours)**: The maximum amount of time, in hours, for an experiment to run and train models. Note that you can also provide values less than 1 (for example `0.5`).
-- **Max concurrent iterations**: Represents the maximum number of iterations that would be executed in parallel.
+- **Max concurrent iterations**: Represents the maximum number of iterations run in parallel.
-- **ONNX model compatibility**: If enabled, the models trained by automated ML will be converted to the ONNX format. This is particularly relevant if you want to use the model for scoring in Azure Synapse SQL pools.
+- **ONNX model compatibility**: If you enable this option, the models trained by automated machine learning are converted to the ONNX format. This is particularly relevant if you want to use the model for scoring in Azure Synapse Analytics SQL pools.
These settings all have a default value that you can customize.
-![additional configurations](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00c.png)
-
-> Note that if you select "Time series forecasting", there are more configurations required. Forecasting also does not support ONNX model compatibility.
-
-Once all required configurations are done, you can start automated ML run.
+![Screenshot of Enrich with new model additional configurations.](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00c.png)
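The settings above can be sketched as an automated machine learning configuration dictionary. This is a hedged illustration with assumed setting names and values, not the wizard's actual generated code:

```python
import logging

# Illustrative mapping of the wizard's "Additional configurations" onto
# automated ML settings; names and values here are assumptions.
automl_settings = {
    "primary_metric": "r2_score",            # Primary metric
    "experiment_timeout_minutes": 30,        # Training job time: 0.5 hours
    "max_concurrent_iterations": 4,          # Max concurrent iterations
    "enable_onnx_compatible_models": True,   # ONNX model compatibility
    "verbosity": logging.INFO,               # Logging level for the run
}

print(automl_settings["primary_metric"])
```

In a notebook run, a dictionary like this would typically be unpacked into the run configuration; each key has a default that the wizard lets you customize.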
-There are two ways to start an automated ML run in Azure Azure Synapse. For a code-free experience, you can choose to **Create run** directly. If you prefer code, you can select **Open in notebook**, which allows you to see the code that creates the run and run the notebook.
+After all the required configurations are done, you can start your automated run. You can choose **Create run**, which starts your run directly, without code. Alternatively, if you prefer code, you can select **Open in notebook**. This option allows you to see the code that creates the run and run the notebook.
-### Create Run directly
+>[!NOTE]
+>If you select **Time series forecasting** as your model type in the previous section, you must make additional configurations. Forecasting also doesn't support ONNX model compatibility.
-Click "Start Run" to start automated ML run directly. There will be a notification that indicates automated ML run is starting.
+### Create run directly
-After the automated ML run is started successfully, you will see another successful notification. You can also click the notification button to check the state of run submission.
-Azure Machine Learning by clicking the link in the successful notification.
-![Successful notification](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00d.png)
+To start your automated machine learning run directly, select **Start Run**. You see a notification that indicates the run is starting. Then you see another notification indicating success. You can also check the status in Azure Machine Learning by selecting the link in the notification.
+![Screenshot of successful notification.](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00d.png)
### Create run with notebook
-Select *Open In Notebook* to generate a notebook. Click *Run all* to execute the notebook.
-This also gives you an opportunity to add additional settings to your automated ML run.
+To generate a notebook, select **Open In Notebook**. Then select **Run all**. This also gives you an opportunity to add additional settings to your automated machine learning run.
-![Open Notebook](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00e.png)
+![Screenshot of Notebook, with Run all highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00e.png)
-After the run from the notebook has been submitted successfully, there will be a link to the experiment run in the Azure Machine Learning workspace in the notebook output. You can click the link to monitor your automated ML run in Azure Machine Learning.
-![Notebook run all](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00f.png))
+After you have successfully submitted the run, you see a link to the experiment run in the Azure Machine Learning workspace in the notebook output. Select the link to monitor your automated run in Azure Machine Learning.
+![Screenshot of Azure Synapse Analytics with link highlighted.](media/tutorial-automl-wizard/tutorial-automl-wizard-configure-run-00f.png)
## Next steps
-- [Tutorial: Machine learning model scoring in Azure Synapse dedicated SQL Pools](tutorial-sql-pool-model-scoring-wizard.md).
-- [Quickstart: Create a new Azure Machine Learning linked service in Azure Synapse](quickstart-integrate-azure-machine-learning.md)
-- [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
+- [Tutorial: Machine learning model scoring wizard (preview) for dedicated SQL pools](tutorial-sql-pool-model-scoring-wizard.md)
+- [Quickstart: Create a new Azure Machine Learning linked service in Azure Synapse Analytics](quickstart-integrate-azure-machine-learning.md)
+- [Machine learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
synapse-analytics https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-azure-machine-learning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-machine-learning-tutorial.md
@@ -1,6 +1,6 @@
---
-title: 'Tutorial: Train a model in Python with automated ML'
-description: Tutorial on how to train a machine learning model in Python in Azure Synapse using Apache Spark and automated ML.
+title: 'Tutorial: Train a model in Python with automated machine learning'
+description: Tutorial on how to train a machine learning model in Python by using Apache Spark and automated machine learning.
services: synapse-analytics
author: midesa
ms.service: synapse-analytics
@@ -11,120 +11,117 @@ ms.author: midesa
ms.reviewer: jrasnick
---
-# Tutorial: Train a machine learning model in Python in Azure Synapse with Apache Spark and automated ML
+# Tutorial: Train a model in Python with automated machine learning
Azure Machine Learning is a cloud-based environment that allows you to train, deploy, automate, manage, and track machine learning models.
-In this tutorial, you use [automated machine learning](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml) in Azure Machine Learning to create a regression model to predict NYC taxi fare prices. This process accepts training data and configuration settings and automatically iterates through combinations of different feature normalization/standardization methods, models, and hyperparameter settings to arrive at the best model.
+In this tutorial, you use [automated machine learning](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml) in Azure Machine Learning to create a regression model to predict taxi fare prices. This process arrives at the best model by accepting training data and configuration settings, and automatically iterating through combinations of different methods, models, and hyperparameter settings.
-In this tutorial you learn the following tasks:
-- Download the data using Apache Spark and Azure Open Datasets
-- Transform and clean data using Apache Spark dataframes
-- Train an automated machine learning regression model
-- Calculate model accuracy
+In this tutorial, you learn how to:
+- Download the data by using Apache Spark and Azure Open Datasets.
+- Transform and clean data by using Apache Spark dataframes.
+- Train an automated machine learning regression model.
+- Calculate model accuracy.
-### Before you begin
+## Before you begin
- Create a serverless Apache Spark Pool by following the [Create a serverless Apache Spark pool quickstart](../quickstart-create-apache-spark-pool-studio.md).
-- Complete the [Azure Machine Learning workspace setup tutorial](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup) if you do not have an existing Azure Machine Learning workspace.
+- Complete the [Azure Machine Learning workspace setup tutorial](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup) if you don't have an existing Azure Machine Learning workspace.
-### Understand regression models
+## Understand regression models
-*Regression models* predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others.
+*Regression models* predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable affects the others.
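As a toy illustration of this idea (data values assumed purely for illustration, not taken from the taxi dataset), fitting a line to a numeric predictor yields a numeric prediction:

```python
import numpy as np

# Toy regression: predict fare (y) from trip distance (x).
# The data is exactly y = 3x + 1, so the fit recovers that line.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([4.0, 7.0, 10.0, 13.0])

slope, intercept = np.polyfit(x, y, 1)   # least-squares line fit
prediction = slope * 5.0 + intercept     # numeric output for a new input
print(round(float(prediction), 2))       # → 16.0
```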
-### Regression analysis example on the NYC taxi data
+### Example based on New York City taxi data
-In this example, you will use Spark to perform some analysis on taxi trip tip data from New York. The data is available through [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/). This subset of the dataset contains information about yellow taxi trips, including information about each trip, the start and end time and locations, the cost, and other interesting attributes.
+In this example, you use Spark to perform some analysis on taxi trip tip data from New York City (NYC). The data is available through [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/catalog/nyc-taxi-limousine-commission-yellow-taxi-trip-records/). This subset of the dataset contains information about yellow taxi trips, including information about each trip, the start and end time and locations, and the cost.
> [!IMPORTANT]
->
-> There may be additional charges for pulling this data from its storage location. In the following steps, you will develop a model to predict NYC taxi fare prices.
->
+> There might be additional charges for pulling this data from its storage location. In the following steps, you develop a model to predict NYC taxi fare prices.
## Download and prepare the data
-1. Create a notebook using the PySpark kernel. For instructions, see [Create a Notebook](https://docs.microsoft.com/azure/synapse-analytics/quickstart-apache-spark-notebook#create-a-notebook.)
-
-> [!Note]
->
-> Because of the PySpark kernel, you do not need to create any contexts explicitly. The Spark context is automatically created for you when you run the first code cell.
->
+Here's how:
-2. Because the raw data is in a Parquet format, you can use the Spark context to pull the file into memory as a dataframe directly. Create a Spark dataframe by retrieving the data via the Open Datasets API. Here, we'll use the Spark dataframe *schema on read* properties to infer the datatypes and schema.
+1. Create a notebook by using the PySpark kernel. For instructions, see [Create a notebook](https://docs.microsoft.com/azure/synapse-analytics/quickstart-apache-spark-notebook#create-a-notebook).
-```python
-blob_account_name = "azureopendatastorage"
-blob_container_name = "nyctlc"
-blob_relative_path = "yellow"
-blob_sas_token = r""
+ > [!Note]
+ > Because of the PySpark kernel, you don't need to create any contexts explicitly. The Spark context is automatically created for you when you run the first code cell.
+
+2. Because the raw data is in a Parquet format, you can use the Spark context to pull the file directly into memory as a dataframe. Create a Spark dataframe by retrieving the data via the Open Datasets API. Here, you use the Spark dataframe `schema on read` properties to infer the datatypes and schema.
+
+ ```python
+ blob_account_name = "azureopendatastorage"
+ blob_container_name = "nyctlc"
+ blob_relative_path = "yellow"
+ blob_sas_token = r""
-# Allow Spark to read from Blob remotely
-wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
-spark.conf.set('fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),blob_sas_token)
+ # Allow Spark to read from Blob remotely
+ wasbs_path = 'wasbs://%s@%s.blob.core.windows.net/%s' % (blob_container_name, blob_account_name, blob_relative_path)
+ spark.conf.set('fs.azure.sas.%s.%s.blob.core.windows.net' % (blob_container_name, blob_account_name),blob_sas_token)
-# Spark read parquet, note that it won't load any data yet by now
-df = spark.read.parquet(wasbs_path)
+ # Spark read parquet, note that it won't load any data yet by now
+ df = spark.read.parquet(wasbs_path)
-```
+ ```
-3. Depending on the size of your Spark pool, the raw data may be too large or take too much time to operate on. You can filter this data down to something smaller by using the ```start_date``` and ```end_date``` filters. This applies a filter that returns a month of data. Once we have the filtered dataframe, we will also run the ```describe()``` function on the new dataframe to see summary statistics for each field.
+3. Depending on the size of your Spark pool, the raw data might be too large or take too much time to operate on. You can filter this data down to something smaller by using the ```start_date``` and ```end_date``` filters. This applies a filter that returns a month of data. After you have the filtered dataframe, you also run the ```describe()``` function on the new dataframe to see summary statistics for each field.
- Based on the summary statistics, we can see that there are some irregularities and outliers in the data. For example, the statistics show that the the minimum trip distance is less than 0. We will need to filter out these irregular data points.
+ Based on the summary statistics, you can see that there are some irregularities in the data. For example, the statistics show that the minimum trip distance is less than 0. You need to filter out these irregular data points.
-```python
-# Create an ingestion filter
-start_date = '2015-01-01 00:00:00'
-end_date = '2015-12-31 00:00:00'
+ ```python
+ # Create an ingestion filter
+ start_date = '2015-01-01 00:00:00'
+ end_date = '2015-12-31 00:00:00'
-filtered_df = df.filter('tpepPickupDateTime > "' + start_date + '" and tpepPickupDateTime < "' + end_date + '"')
+    filtered_df = df.filter('tpepPickupDateTime > "' + start_date + '" and tpepPickupDateTime < "' + end_date + '"')
-filtered_df.describe().show()
-```
+ filtered_df.describe().show()
+ ```
-4. Now, we will generate features from the dataset by selecting a set of columns and creating various time-based features from the pickup datetime field. We will also filter out outliers that were identified from the earlier step and then remove the last few columns which are unnecessary for training.
+4. Next, generate features from the dataset by selecting a set of columns and creating various time-based features from the pickup datetime field. Filter out the outliers that were identified from the earlier step, and then remove the last few columns because they are unnecessary for training.
-```python
-from datetime import datetime
-from pyspark.sql.functions import *
-
-# To make development easier, faster and less expensive down sample for now
-sampled_taxi_df = filtered_df.sample(True, 0.001, seed=1234)
-
-taxi_df = sampled_taxi_df.select('vendorID', 'passengerCount', 'tripDistance', 'startLon', 'startLat', 'endLon' \
- , 'endLat', 'paymentType', 'fareAmount', 'tipAmount'\
- , column('puMonth').alias('month_num') \
- , date_format('tpepPickupDateTime', 'hh').alias('hour_of_day')\
- , date_format('tpepPickupDateTime', 'EEEE').alias('day_of_week')\
- , dayofmonth(col('tpepPickupDateTime')).alias('day_of_month')
- ,(unix_timestamp(col('tpepDropoffDateTime')) - unix_timestamp(col('tpepPickupDateTime'))).alias('trip_time'))\
- .filter((sampled_taxi_df.passengerCount > 0) & (sampled_taxi_df.passengerCount < 8)\
- & (sampled_taxi_df.tipAmount >= 0)\
- & (sampled_taxi_df.fareAmount >= 1) & (sampled_taxi_df.fareAmount <= 250)\
- & (sampled_taxi_df.tipAmount < sampled_taxi_df.fareAmount)\
- & (sampled_taxi_df.tripDistance > 0) & (sampled_taxi_df.tripDistance <= 200)\
- & (sampled_taxi_df.rateCodeId <= 5)\
- & (sampled_taxi_df.paymentType.isin({"1", "2"})))
-taxi_df.show(10)
-```
+ ```python
+ from datetime import datetime
+ from pyspark.sql.functions import *
+
+ # To make development easier, faster and less expensive down sample for now
+ sampled_taxi_df = filtered_df.sample(True, 0.001, seed=1234)
+
+ taxi_df = sampled_taxi_df.select('vendorID', 'passengerCount', 'tripDistance', 'startLon', 'startLat', 'endLon' \
+ , 'endLat', 'paymentType', 'fareAmount', 'tipAmount'\
+ , column('puMonth').alias('month_num') \
+ , date_format('tpepPickupDateTime', 'hh').alias('hour_of_day')\
+ , date_format('tpepPickupDateTime', 'EEEE').alias('day_of_week')\
+ , dayofmonth(col('tpepPickupDateTime')).alias('day_of_month')
+ ,(unix_timestamp(col('tpepDropoffDateTime')) - unix_timestamp(col('tpepPickupDateTime'))).alias('trip_time'))\
+ .filter((sampled_taxi_df.passengerCount > 0) & (sampled_taxi_df.passengerCount < 8)\
+ & (sampled_taxi_df.tipAmount >= 0)\
+ & (sampled_taxi_df.fareAmount >= 1) & (sampled_taxi_df.fareAmount <= 250)\
+ & (sampled_taxi_df.tipAmount < sampled_taxi_df.fareAmount)\
+ & (sampled_taxi_df.tripDistance > 0) & (sampled_taxi_df.tripDistance <= 200)\
+ & (sampled_taxi_df.rateCodeId <= 5)\
+ & (sampled_taxi_df.paymentType.isin({"1", "2"})))
+ taxi_df.show(10)
+ ```
As you can see, this will create a new dataframe with additional columns for the day of the month, pickup hour, weekday, and total trip time.
-
-![Picture of taxi dataframe.](./media/azure-machine-learning-spark-notebook/dataset.png#lightbox)
+ ![Picture of taxi dataframe.](./media/azure-machine-learning-spark-notebook/dataset.png#lightbox)
## Generate test and validation datasets
-Once we have our final dataset, we can split the data into training and test sets by using the Spark ```random_ split ``` function. Using the provided weights, this function randomly splits the data into the training dataset for model training and the validation dataset for testing.
+After you have your final dataset, you can split the data into training and test sets by using the ```randomSplit``` function in Spark. Using the provided weights, this function randomly splits the data into the training dataset for model training and the validation dataset for testing.
```python
-# Random split dataset using spark, convert Spark to Pandas
+# Random split dataset using Spark, convert Spark to Pandas
training_data, validation_data = taxi_df.randomSplit([0.8,0.2], 223)
```
-This step ensures that the data points to test the finished model that haven't been used to train the model.
+This step ensures that the data points used to test the finished model haven't been used to train the model.
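The behavior of a weighted random split can be sketched in plain Python. This only mimics the idea behind the Spark function (each row independently lands in one set with the given probability); it is not Spark's implementation:

```python
import random

# Plain-Python sketch of an 80/20 random split; seed fixed for
# reproducibility, echoing the seed used in the Spark call above.
random.seed(223)
rows = list(range(1000))

training, validation = [], []
for row in rows:
    # Each row goes to training with probability 0.8, else validation.
    (training if random.random() < 0.8 else validation).append(row)

# Every row lands in exactly one of the two sets.
assert len(training) + len(validation) == len(rows)
print(len(training) / len(rows))  # close to 0.8
```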
-## Connect to an Azure Machine Learning Workspace
-In Azure Machine Learning, a **Workspace** is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs. In this step, we will create a workspace object from the existing Azure Machine Learning workspace.
+## Connect to an Azure Machine Learning workspace
+In Azure Machine Learning, a workspace is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs. In this step, you create a workspace object from the existing Azure Machine Learning workspace.
```python
from azureml.core import Workspace
@@ -141,8 +138,8 @@ ws = Workspace(workspace_name = workspace_name,
```
-## Convert a dataframe to an Azure Machine Learning Dataset
-To submit a remote experiment, we will need to convert our dataset into an Azure Machine Learning ```TabularDatset```. A [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py&preserve-view=true) represents data in a tabular format by parsing the provided files.
+## Convert a dataframe to an Azure Machine Learning dataset
+To submit a remote experiment, convert your dataset into an Azure Machine Learning ```TabularDataset```. A [TabularDataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py&preserve-view=true) represents data in a tabular format by parsing the provided files.
The following code gets the existing workspace and the default Azure Machine Learning datastore. It then passes the datastore and file locations to the path parameter to create a new ```TabularDataset```.
@@ -163,42 +160,44 @@ dataset_training = Dataset.Tabular.from_delimited_files(path = [(datastore, 'tra
```
![Picture of uploaded dataset.](./media/azure-machine-learning-spark-notebook/upload-dataset.png)
-## Submit an automated ML experiment
+## Submit an automated experiment
-#### Define training settings
-1. To submit an experiment, we will need to define the experiment parameter and model settings for training. You can view the full list of settings [here](https://docs.microsoft.com/azure/machine-learning/how-to-configure-auto-train).
+The following sections walk you through the process of submitting an automated machine learning experiment.
-```python
-import logging
-
-automl_settings = {
- "iteration_timeout_minutes": 10,
- "experiment_timeout_minutes": 30,
- "enable_early_stopping": True,
- "primary_metric": 'r2_score',
- "featurization": 'auto',
- "verbosity": logging.INFO,
- "n_cross_validations": 2}
-```
+### Define training settings
+1. To submit an experiment, you need to define the experiment parameter and model settings for training. For the full list of settings, see [Configure automated machine learning experiments in Python](https://docs.microsoft.com/azure/machine-learning/how-to-configure-auto-train).
-2. Now, we will pass the defined training settings as a **kwargs parameter to an AutoMLConfig object. Since we are training in Spark, we will also have to pass the Spark Context which is automatically accessible by the ```sc``` variable. Additionally, we will specify the training data and the type of model, which is regression in this case.
+ ```python
+ import logging
-```python
-from azureml.train.automl import AutoMLConfig
-
-automl_config = AutoMLConfig(task='regression',
- debug_log='automated_ml_errors.log',
- training_data = dataset_training,
- spark_context = sc,
- model_explainability = False,
- label_column_name ="fareAmount",**automl_settings)
-```
+ automl_settings = {
+ "iteration_timeout_minutes": 10,
+ "experiment_timeout_minutes": 30,
+ "enable_early_stopping": True,
+ "primary_metric": 'r2_score',
+ "featurization": 'auto',
+ "verbosity": logging.INFO,
+ "n_cross_validations": 2}
+ ```
+
+1. Pass the defined training settings as a `**kwargs` parameter to an `AutoMLConfig` object. Because you're using Spark, you must also pass the Spark context, which is automatically accessible by the ```sc``` variable. Additionally, you specify the training data and the type of model, which is regression in this case.
+
+ ```python
+ from azureml.train.automl import AutoMLConfig
+
+ automl_config = AutoMLConfig(task='regression',
+ debug_log='automated_ml_errors.log',
+ training_data = dataset_training,
+ spark_context = sc,
+ model_explainability = False,
+ label_column_name ="fareAmount",**automl_settings)
+ ```
> [!NOTE]
->Automated machine learning pre-processing steps (feature normalization, handling missing data, converting text to numeric, etc.) become part of the underlying model. When using the model for predictions, the same pre-processing steps applied during training are applied to your input data automatically.
+>Automated machine learning pre-processing steps become part of the underlying model. These steps include feature normalization, handling missing data, and converting text to numeric. When you're using the model for predictions, the same pre-processing steps applied during training are applied to your input data automatically.
-#### Train the automatic regression model
-Now, we will create an experiment object in your Azure Machine Learning workspace. An experiment acts as a container for your individual runs.
+### Train the automatic regression model
+Next, you create an experiment object in your Azure Machine Learning workspace. An experiment acts as a container for your individual runs.
```python
from azureml.core.experiment import Experiment
@@ -211,113 +210,113 @@ local_run = experiment.submit(automl_config, show_output=True, tags = tags)
# Use the get_details function to retrieve the detailed output for the run.
run_details = local_run.get_details()
```
-Once the experiment has completed, the output will return details about the completed iterations. For each iteration, you see the model type, the run duration, and the training accuracy. The field BEST tracks the best running training score based on your metric type.
+When the experiment has completed, the output returns details about the completed iterations. For each iteration, you see the model type, the run duration, and the training accuracy. The **BEST** field tracks the best-running training score based on your metric type.
![Screenshot of model output.](./media/azure-machine-learning-spark-notebook/model-output.png)

> [!NOTE]
-> Once submitted, the automated ML experiment will run various iterations and model types. This run will typically take 1-1.5 hours.
+> After you submit the automated machine learning experiment, it runs various iterations and model types. This run typically takes 60-90 minutes.
-#### Retrieve the best model
-To select the best model from your iterations, we will use the ```get_output``` function to return the best run and fitted model. The code below will retrieve the best run and fitted model for any logged metric or a particular iteration.
+### Retrieve the best model
+To select the best model from your iterations, use the ```get_output``` function to return the best run and fitted model. The following code retrieves the best run and fitted model for any logged metric or a particular iteration.
```python
# Get best model
best_run, fitted_model = local_run.get_output()
```
-#### Test model accuracy
-1. To test the model accuracy, we will use the best model to run taxi fare predictions on the test data set. The ```predict``` function uses the best model and predicts the values of y (fare amount) from the validation dataset.
+### Test model accuracy
+1. To test the model accuracy, use the best model to run taxi fare predictions on the test dataset. The ```predict``` function uses the best model and predicts the values of y (fare amount) from the validation dataset.
-```python
-# Test best model accuracy
-validation_data_pd = validation_data.toPandas()
-y_test = validation_data_pd.pop("fareAmount").to_frame()
-y_predict = fitted_model.predict(validation_data_pd)
-```
+ ```python
+ # Test best model accuracy
+ validation_data_pd = validation_data.toPandas()
+ y_test = validation_data_pd.pop("fareAmount").to_frame()
+ y_predict = fitted_model.predict(validation_data_pd)
+ ```
-2. The root-mean-square error (RMSE) is a frequently used measure of the differences between sample values predicted by a model and the values observed. We will calculate the root mean squared error of the results by comparing the y_test dataframe to the values predicted by the model.
+1. The root-mean-square error is a frequently used measure of the differences between sample values predicted by a model and the values observed. You calculate the root-mean-square error of the results by comparing the `y_test` dataframe to the values predicted by the model.
- The function ```mean_squared_error``` takes two arrays and calculates the average squared error between them. We then take the square root of the result. This metric indicates roughly how far the taxi fare predictions are from the actual fares values.
+    The function ```mean_squared_error``` takes two arrays and calculates the average squared error between them. You then take the square root of the result. This metric indicates roughly how far the taxi fare predictions are from the actual fare values.
-```python
-from sklearn.metrics import mean_squared_error
-from math import sqrt
+ ```python
+ from sklearn.metrics import mean_squared_error
+ from math import sqrt
-# Calculate Root Mean Square Error
-y_actual = y_test.values.flatten().tolist()
-rmse = sqrt(mean_squared_error(y_actual, y_predict))
+ # Calculate Root Mean Square Error
+ y_actual = y_test.values.flatten().tolist()
+ rmse = sqrt(mean_squared_error(y_actual, y_predict))
-print("Root Mean Square Error:")
-print(rmse)
-```
+ print("Root Mean Square Error:")
+ print(rmse)
+ ```
-```Output
-Root Mean Square Error:
-2.309997102577151
-```
-The root-mean-square error is a good measure of how accurately the model predicts the response. From the results , you see that the model is fairly good at predicting taxi fares from the data set's features, typically within +- $2.00
+ ```Output
+ Root Mean Square Error:
+ 2.309997102577151
+ ```
+    The root-mean-square error is a good measure of how accurately the model predicts the response. From the results, you see that the model is fairly good at predicting taxi fares from the dataset's features, typically within ±$2.00.
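The RMSE computation in the step above can be sketched in plain Python as a quick sanity check. The sample fares below are hypothetical values, not drawn from the tutorial's dataset:

```python
from math import sqrt

# Hypothetical actual and predicted taxi fares
y_actual = [10.0, 7.5, 12.0, 5.0]
y_predict = [11.0, 7.0, 10.5, 6.0]

# Root-mean-square error: square root of the mean of squared differences
squared_errors = [(a - p) ** 2 for a, p in zip(y_actual, y_predict)]
rmse = sqrt(sum(squared_errors) / len(squared_errors))

print("Root Mean Square Error:")
print(rmse)
```

This is equivalent to `sqrt(mean_squared_error(y_actual, y_predict))` from scikit-learn on the same inputs.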
-3. Run the following code to calculate mean absolute percent error (MAPE). This metric expresses accuracy as a percentage of the error. It does this by calculating an absolute difference between each predicted and actual value and then summing all the differences. Then, it expresses that sum as a percent of the total of the actual values.
+1. Run the following code to calculate the mean absolute percent error (MAPE). This metric expresses accuracy as a percentage of the error. It does this by calculating an absolute difference between each predicted and actual value and then summing all the differences. Then, it expresses that sum as a percent of the total of the actual values.
-```python
-# Calculate MAPE and Model Accuracy
-sum_actuals = sum_errors = 0
+ ```python
+ # Calculate MAPE and Model Accuracy
+ sum_actuals = sum_errors = 0
-for actual_val, predict_val in zip(y_actual, y_predict):
- abs_error = actual_val - predict_val
- if abs_error < 0:
- abs_error = abs_error * -1
+ for actual_val, predict_val in zip(y_actual, y_predict):
+ abs_error = actual_val - predict_val
+ if abs_error < 0:
+ abs_error = abs_error * -1
- sum_errors = sum_errors + abs_error
- sum_actuals = sum_actuals + actual_val
+ sum_errors = sum_errors + abs_error
+ sum_actuals = sum_actuals + actual_val
-mean_abs_percent_error = sum_errors / sum_actuals
+ mean_abs_percent_error = sum_errors / sum_actuals
-print("Model MAPE:")
-print(mean_abs_percent_error)
-print()
-print("Model Accuracy:")
-print(1 - mean_abs_percent_error)
-```
+ print("Model MAPE:")
+ print(mean_abs_percent_error)
+ print()
+ print("Model Accuracy:")
+ print(1 - mean_abs_percent_error)
+ ```
-```Output
-Model MAPE:
-0.03655071038487368
+ ```Output
+ Model MAPE:
+ 0.03655071038487368
-Model Accuracy:
-0.9634492896151263
-```
-From the two prediction accuracy metrics, you see that the model is fairly good at predicting taxi fares from the data set's features.
+ Model Accuracy:
+ 0.9634492896151263
+ ```
+ From the two prediction accuracy metrics, you see that the model is fairly good at predicting taxi fares from the dataset's features.
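The MAPE loop in the step above can be written more compactly. Note that, as computed in the tutorial (total absolute error divided by total of the actuals), this is the weighted form of the metric. The sample fares below are hypothetical:

```python
# Hypothetical actual and predicted taxi fares
y_actual = [10.0, 7.5, 12.0, 5.0]
y_predict = [11.0, 7.0, 10.5, 6.0]

# MAPE as computed in the tutorial: sum of absolute errors / sum of actuals
sum_errors = sum(abs(a - p) for a, p in zip(y_actual, y_predict))
sum_actuals = sum(y_actual)
mean_abs_percent_error = sum_errors / sum_actuals

print("Model MAPE:", mean_abs_percent_error)
print("Model Accuracy:", 1 - mean_abs_percent_error)
```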
-4. After fitting a linear regression model, we will now need to determine how well the model fits the data. To do this, we will plot the actual fare values against the predicted output. In addition, we will also calculate the R-squared measure to understand how close the data is to the fitted regression line.
+1. After fitting a linear regression model, you now need to determine how well the model fits the data. To do this, you plot the actual fare values against the predicted output. In addition, you calculate the R-squared measure to understand how close the data is to the fitted regression line.
-```python
-import matplotlib.pyplot as plt
-import numpy as np
-from sklearn.metrics import mean_squared_error, r2_score
-
-# Calculate the R2 score using the predicted and actual fare prices
-y_test_actual = y_test["fareAmount"]
-r2 = r2_score(y_test_actual, y_predict)
-
-# Plot the Actual vs Predicted Fare Amount Values
-plt.style.use('ggplot')
-plt.figure(figsize=(10, 7))
-plt.scatter(y_test_actual,y_predict)
-plt.plot([np.min(y_test_actual), np.max(y_test_actual)], [np.min(y_test_actual), np.max(y_test_actual)], color='lightblue')
-plt.xlabel("Actual Fare Amount")
-plt.ylabel("Predicted Fare Amount")
-plt.title("Actual vs Predicted Fare Amont R^2={}".format(r2))
-plt.show()
+ ```python
+ import matplotlib.pyplot as plt
+ import numpy as np
+ from sklearn.metrics import mean_squared_error, r2_score
-```
-![Screenshot of regression plot.](./media/azure-machine-learning-spark-notebook/fare-amount.png)
+ # Calculate the R2 score using the predicted and actual fare prices
+ y_test_actual = y_test["fareAmount"]
+ r2 = r2_score(y_test_actual, y_predict)
+
+ # Plot the Actual vs Predicted Fare Amount Values
+ plt.style.use('ggplot')
+ plt.figure(figsize=(10, 7))
+ plt.scatter(y_test_actual,y_predict)
+ plt.plot([np.min(y_test_actual), np.max(y_test_actual)], [np.min(y_test_actual), np.max(y_test_actual)], color='lightblue')
+ plt.xlabel("Actual Fare Amount")
+ plt.ylabel("Predicted Fare Amount")
+ plt.title("Actual vs Predicted Fare Amount R^2={}".format(r2))
+ plt.show()
+
+ ```
+ ![Screenshot of regression plot.](./media/azure-machine-learning-spark-notebook/fare-amount.png)
- From the results, we can see that the R-squared measure accounts for 95% of our variance. This is also validated by the actual verses observed plot. The more variance that is accounted for by the regression model the closer the data points will fall to the fitted regression line.
+    From the results, you can see that the R-squared measure accounts for 95 percent of the variance. This is also validated by the plot of actual versus predicted fare amounts. The more variance that's accounted for by the regression model, the closer the data points fall to the fitted regression line.
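The R-squared value that `r2_score` returns follows the standard definition: one minus the residual sum of squares divided by the total sum of squares. A minimal plain-Python check, using hypothetical fares rather than the tutorial's data:

```python
# Hypothetical actual and predicted taxi fares
y_actual = [10.0, 7.5, 12.0, 5.0]
y_predict = [11.0, 7.0, 10.5, 6.0]

mean_actual = sum(y_actual) / len(y_actual)

# R^2 = 1 - SS_res / SS_tot
ss_res = sum((a - p) ** 2 for a, p in zip(y_actual, y_predict))
ss_tot = sum((a - mean_actual) ** 2 for a in y_actual)
r2 = 1 - ss_res / ss_tot

print("R^2:", r2)
```

An R² close to 1 means the regression accounts for most of the variance in the actual values, which is why the scatter points hug the diagonal line in the plot.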
## Register model to Azure Machine Learning
-Once we have validated our best model, we can register the model to Azure Machine Learning. After you register the model, you can then download or deploy the registered model and receive all the files that you registered.
+After you've validated your best model, you can register it to Azure Machine Learning. Then, you can download or deploy the registered model and receive all the files that you registered.
```python description = 'My automated ML model'
@@ -329,10 +328,10 @@ print(model.name, model.version)
NYCGreenTaxiModel 1 ``` ## View results in Azure Machine Learning
-Last, you can also access the results of the iterations by navigating to the experiment in your Azure Machine Learning Workspace. Here, you will be able to dig into additional details on the status of your run, attempted models, and other model metrics.
+Last, you can also access the results of the iterations by going to the experiment in your Azure Machine Learning workspace. Here, you can dig into additional details on the status of your run, attempted models, and other model metrics.
![Screenshot of Azure Machine Learning workspace.](./media/azure-machine-learning-spark-notebook/azure-machine-learning-workspace.png) ## Next steps - [Azure Synapse Analytics](https://docs.microsoft.com/azure/synapse-analytics)-- [Apache Spark MLlib Tutorial](./apache-spark-machine-learning-mllib-notebook.md)\ No newline at end of file
+- [Tutorial: Build a machine learning app with Apache Spark MLlib and Azure Synapse Analytics](./apache-spark-machine-learning-mllib-notebook.md)
virtual-machine-scale-sets https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/quick-create-portal.md
@@ -56,13 +56,12 @@ You can deploy a scale set with a Windows Server image or Linux image such as RH
1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Type *myVMSSResourceGroup* for the name and then select **OK**. 1. Type *myScaleSet* as the name for your scale set. 1. In **Region**, select a region that is close to your area.
-1. Leave the default value of **ScaleSet VMs** for **Orchestration mode**.
1. Select a marketplace image for **Image**. In this example, we have chosen *Ubuntu Server 18.04 LTS*. 1. Enter your desired username, and select which authentication type you prefer. - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.md#what-are-the-username-requirements-when-creating-a-vm). - If you select a Linux OS disk image, you can instead choose **SSH public key**. Only provide your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
- ![Create a virtual machine scale set](./media/virtual-machine-scale-sets-create-portal/quick-create-scaleset.png)
+ :::image type="content" source="./media/virtual-machine-scale-sets-create-portal/quick-create-scale-set.png" alt-text="Image shows create options for scale sets in the Azure portal.":::
1. Select **Next** to move to the other pages. 1. Leave the defaults for the **Instance** and **Disks** pages.
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/disks-performance-tiers-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-performance-tiers-portal.md
@@ -4,7 +4,7 @@ description: Learn how to change performance tiers for new and existing managed
author: roygara ms.service: virtual-machines ms.topic: how-to
-ms.date: 11/19/2020
+ms.date: 01/05/2021
ms.author: rogarana ms.subservice: disks ms.custom: references_regions
@@ -42,7 +42,7 @@ The following steps outline how to change the performance tier of an existing di
1. Either deallocate the VM or detach the disk. 1. Select your disk 1. Select **Size + Performance**.
-1. In the **Performance tier** dropdown, select a tier that is different than the disk's current baseline.
+1. In the **Performance tier** dropdown, select a tier other than the disk's current performance tier.
1. Select **Resize**. :::image type="content" source="media/disks-performance-tiers-portal/change-tier-existing-disk.png" alt-text="Screenshot of the size + performance blade, performance tier is highlighted." lightbox="media/disks-performance-tiers-portal/performance-tier-settings.png":::
virtual-machines