Updates from: 03/24/2021 04:10:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/secure-your-domain.md
Previously updated : 07/06/2020 Last updated : 03/08/2021
To complete this article, you need the following resources:
* If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
  * If needed, [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
-* Install and configure Azure PowerShell.
- * If needed, follow the instructions to [install the Azure PowerShell module and connect to your Azure subscription](/powershell/azure/install-az-ps).
- * Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
-* Install and configure Azure AD PowerShell.
- * If needed, follow the instructions to [install the Azure AD PowerShell module and connect to Azure AD](/powershell/azure/active-directory/install-adv2).
- * Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
-
-## Disable weak ciphers and NTLM password hash sync
+
+## Use Security settings to disable weak ciphers and NTLM password hash sync
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for and select **Azure AD Domain Services**.
+1. Choose your managed domain, such as *aaddscontoso.com*.
+1. On the left-hand side, select **Security settings**.
+1. Click **Disable** for the following settings:
+ - **TLS 1.2 only mode**
+ - **NTLM authentication**
+ - **NTLM password synchronization from on-premises**
+
+ ![Screenshot of Security settings to disable weak ciphers and NTLM password hash sync](media/secure-your-domain/security-settings.png)
+
+## Use PowerShell to disable weak ciphers and NTLM password hash sync
+
+If needed, [install and configure Azure PowerShell](/powershell/azure/install-az-ps). Make sure that you sign in to your Azure subscription using the [Connect-AzAccount][Connect-AzAccount] cmdlet.
+
+Also if needed, [install and configure Azure AD PowerShell](/powershell/azure/active-directory/install-adv2). Make sure that you sign in to your Azure AD tenant using the [Connect-AzureAD][Connect-AzureAD] cmdlet.
To disable weak cipher suites and NTLM credential hash synchronization, sign in to your Azure account, then get the Azure AD DS resource using the [Get-AzResource][Get-AzResource] cmdlet:
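A minimal PowerShell sketch of the flow just described (sign in, fetch the managed domain resource, then push hardened security settings). The excerpt only confirms the `Get-AzResource` step; the `Microsoft.AAD/DomainServices` resource type, the `Set-AzResource` call, and the `NtlmV1`/`TlsV1`/`SyncNtlmPasswords` property names are assumptions drawn from the managed domain resource schema:

```powershell
# Sign in to the Azure subscription that hosts the managed domain.
Connect-AzAccount

# Get the Azure AD DS resource; the resource type string is an assumption.
$DomainServicesResource = Get-AzResource -ResourceType "Microsoft.AAD/DomainServices"

# Disable weak ciphers and NTLM password hash sync; property names are assumptions.
$securitySettings = @{"DomainSecuritySettings" = @{
    "NtlmV1"            = "Disabled"
    "TlsV1"             = "Disabled"
    "SyncNtlmPasswords" = "Disabled"
}}

# Apply the settings to the managed domain resource.
Set-AzResource -Id $DomainServicesResource.ResourceId -Properties $securitySettings -Verbose -Force
```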
active-directory Concept Sspr Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-licensing.md
Previously updated : 06/02/2020 Last updated : 03/08/2021
This article details the different ways that self-service password reset can be
## Compare editions and features
-SSPR is licensed per user. To maintain compliance, organizations are required to assign the appropriate license to their users.
+SSPR requires a license only for the tenant.
The following table outlines the different SSPR scenarios for password change, reset, or on-premises writeback, and which SKUs provide the feature.
For additional licensing information, including costs, see the following pages:
* [Microsoft 365 Enterprise](https://www.microsoft.com/microsoft-365/enterprise) * [Microsoft 365 Business](/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-business-service-description)
-## Enable group or user-based licensing
-
-Azure AD supports group-based licensing. Administrators can assign licenses in bulk to a group of users, rather than assigning them one at a time. For more information, see [Assign, verify, and resolve problems with licenses](../enterprise-users/licensing-groups-assign.md#step-1-assign-the-required-licenses).
-
-Some Microsoft services aren't available in all locations. Before a license can be assigned to a user, the administrator must specify the **Usage location** property on the user. Assignment of licenses can be done under the **User** > **Profile** > **Settings** section in the Azure portal. *When you use group license assignment, any users without a usage location specified inherit the location of the directory.*
- ## Next steps To get started with SSPR, complete the following tutorial:
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-sspr.md
Previously updated : 07/13/2020 Last updated : 03/23/2021
If you no longer want to use the SSPR functionality you have configured as part
1. From the **Properties** page, under the option *Self service password reset enabled*, choose **None**. 1. To apply the SSPR change, select **Save**.
+## FAQs
+
+This section addresses common questions from administrators and end users who use SSPR:
+
+- Why do federated users wait up to 2 minutes after they see **Your password has been reset** before they can use passwords that are synchronized from on-premises?
+
+ For federated users whose passwords are synchronized, the source of authority for the passwords is on-premises. As a result, SSPR updates only the on-premises passwords. Password hash synchronization back to Azure AD is scheduled for every 2 minutes.
+
+- When a newly created user who is pre-populated with SSPR data such as phone and email visits the SSPR registration page, **Don't lose access to your account!** appears as the title of the page. Why don't other users who have SSPR data pre-populated see the message?
+
+ A user who sees **Don't lose access to your account!** is a member of SSPR/combined registration groups that are configured for the tenant. Users who don't see **Don't lose access to your account!** were not part of the SSPR/combined registration groups.
+
+- When some users go through the SSPR process and reset their password, why don't they see the password strength indicator?
+
+ Users who don't see the password strength indicator have synchronized password writeback enabled. Since SSPR can't determine the password policy of the customer's on-premises environment, it cannot validate password strength or weakness.
+ ## Next steps In this tutorial, you enabled Azure AD self-service password reset for a selected group of users. You learned how to:
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
The following options are available to include when creating a Conditional Acces
- All guest and external users - This selection includes any B2B guests and external users including any user with the `user type` attribute set to `guest`. This selection also applies to any external user signed-in from a different organization like a Cloud Solution Provider (CSP). - Directory roles
- - Allows administrators to select specific built-in Azure AD directory roles used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role. Other role types are not supported, including administrative unit-scoped directory roles, custom roles.
+ - Allows administrators to select specific built-in Azure AD directory roles used to determine policy assignment. For example, organizations may create a more restrictive policy on users assigned the global administrator role. Other role types are not supported, including administrative unit-scoped roles and custom roles.
- Users and groups - Allows targeting of specific sets of users. For example, organizations can select a group that contains all members of the HR department when an HR app is selected as the cloud app. A group can be any type of group in Azure AD, including dynamic or assigned security and distribution groups. Policy will be applied to nested users and groups.
active-directory What If Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/what-if-tool.md
This article explains how you can use this tool to test your Conditional Access policies.
+> [!VIDEO https://www.youtube.com/embed/M_iQVM-3C3E]
+ ## What it is The **Conditional Access What If policy tool** allows you to understand the impact of your Conditional Access policies on your environment. Instead of test driving your policies by performing multiple sign-ins manually, this tool enables you to evaluate a simulated sign-in of a user. The simulation estimates the impact this sign-in has on your policies and generates a simulation report. The report lists not only the applied Conditional Access policies but also [classic policies](policy-migration.md#classic-policies), if they exist.
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
This option is a premium edition capability available through products such as A
> - We recommend using ["Register or join devices" user action](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) in Conditional Access for enforcing multi-factor authentication for joining or registering a device. > - You must set this setting to **No** if you are using Conditional Access policy to require multi-factor authentication. -- **Maximum number of devices** - This setting enables you to select the maximum number of Azure AD joined or Azure AD registered devices that a user can have in Azure AD. If a user reaches this quota, they are not able to add additional devices until one or more of the existing devices are removed. The default value is **50**.
+- **Maximum number of devices** - This setting enables you to select the maximum number of Azure AD joined or Azure AD registered devices that a user can have in Azure AD. If a user reaches this quota, they are not able to add additional devices until one or more of the existing devices are removed. The default value is **50**. You can increase the value up to 100. If you enter a value above 100, Azure AD sets it to 100. You can also select **Unlimited** to enforce no limit other than existing quota limits.
> [!NOTE] > **Maximum number of devices** setting applies to devices that are either Azure AD joined or Azure AD registered. This setting does not apply to hybrid Azure AD joined devices.
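The clamping behavior described above (default 50, values above 100 reduced to 100, Unlimited meaning no per-user cap) can be modeled with a short sketch; this is an illustration of the documented rules, not Azure AD's actual implementation:

```python
UNLIMITED = None  # models the "Unlimited" choice: no per-user device cap

def effective_device_limit(requested):
    """Model of the Maximum number of devices setting as described:
    values above 100 are set to 100; Unlimited removes the cap."""
    if requested is UNLIMITED:
        return None
    return min(requested, 100)

print(effective_device_limit(50))        # the default value stays 50
print(effective_device_limit(250))       # clamped down to 100
print(effective_device_limit(UNLIMITED)) # None: no cap beyond quota limits
```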
active-directory Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/faq.md
Previously updated : 02/12/2021 Last updated : 03/08/2021
Yes. Multi-factor authentication and consumer email accounts are both supported
### Do you support password reset for Azure AD B2B collaboration users? If your Azure AD tenant is the home directory for a user, you can [reset the user's password](../fundamentals/active-directory-users-reset-password-azure-portal.md) from the Azure portal. But you can't directly reset a password for a guest user who signs in with an account that's managed by another Azure AD directory or external identity provider. Only the guest user or an administrator in the user's home directory can reset the password. Here are some examples of how password reset works for guest users:
+* Guest users in an Azure AD tenant who are marked "Guest" (UserType==Guest) cannot register for SSPR through [https://aka.ms/ssprsetup](https://aka.ms/ssprsetup). This type of guest user can only perform SSPR through [https://aka.ms/sspr](https://aka.ms/sspr).
* Guest users who sign in with a Microsoft account (for example guestuser@live.com) can reset their own passwords using Microsoft account self-service password reset (SSPR). See [How to reset your Microsoft account password](https://support.microsoft.com/help/4026971/microsoft-account-how-to-reset-your-password). * Guest users who sign in with a Google account or another external identity provider can reset their own passwords using their identity provider's SSPR method. For example, a guest user with the Google account guestuser@gmail.com can reset their password by following the instructions in [Change or reset your password](https://support.google.com/accounts/answer/41078). * If the identity tenant is a just-in-time (JIT) or "viral" tenant (meaning it's a separate, unmanaged Azure tenant), only the guest user can reset their password. Sometimes an organization will [take over management of viral tenants](../enterprise-users/domains-admin-takeover.md) that are created when employees use their work email addresses to sign up for services. After the organization takes over a viral tenant, only an administrator in that organization can reset the user's password or enable SSPR. If necessary, as the inviting organization, you can remove the guest user account from your directory and resend an invitation.
active-directory Identity Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/identity-secure-score.md
Previously updated : 02/20/2020 Last updated : 03/23/2021 -+ #Customer intent: As an IT admin, I want to understand the identity secure score, so that I can maximize the security posture of my tenant.
How secure is your Azure AD tenant? If you don't know how to answer this questio
## What is an identity secure score?
-The identity secure score is number between 1 and 223 that functions as an indicator for how aligned you are with Microsoft's best practice recommendations for security. Each improvement action in identity secure score is tailored to your specific configuration.
+The identity secure score is a percentage that functions as an indicator for how aligned you are with Microsoft's best practice recommendations for security. Each improvement action in identity secure score is tailored to your specific configuration.
![Secure score](./media/identity-secure-score/identity-secure-score-overview.png)
The identity secure score can be used by the following roles:
### How are controls scored?
-Controls can be scored in two ways. Some are scored in a binary fashion - you get 100% of the score if you have the feature or setting configured based on our recommendation. Other scores are calculated as a percentage of the total configuration. For example, if the improvement recommendation states you'll get 30 points if you protect all your users with MFA and you only have 5 of 100 total users protected, you would be given a partial score around 2 points (5 protected / 100 total * 30 max pts = 2 pts partial score).
+Controls can be scored in two ways. Some are scored in a binary fashion - you get 100% of the score if you have the feature or setting configured based on our recommendation. Other scores are calculated as a percentage of the total configuration. For example, if the improvement recommendation states you'll get a maximum of 10.71% if you protect all your users with MFA and you only have 5 of 100 total users protected, you would be given a partial score around 0.53% (5 protected / 100 total * 10.71% maximum = 0.53% partial score).
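The partial-score arithmetic in the example above can be sanity-checked with a couple of lines (illustrative only; 10.71% is the maximum contribution quoted in the text):

```python
def partial_score(protected, total, max_contribution):
    """Prorate a control's maximum contribution by the fraction of users covered."""
    return protected / total * max_contribution

# 5 of 100 users protected by MFA, control worth at most 10.71%
print(partial_score(5, 100, 10.71))   # about 0.53% (5 / 100 * 10.71 = 0.5355)
print(partial_score(100, 100, 10.71)) # full coverage earns the full 10.71%
```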
### What does [Not Scored] mean?
In short, no. The secure score does not express an absolute measure of how likel
### How should I interpret my score?
-You're given points for configuring recommended security features or performing security-related tasks (like reading reports). Some actions are scored for partial completion, like enabling multi-factor authentication (MFA) for your users. Your secure score is directly representative of the Microsoft security services you use. Remember that security must be balanced with usability. All security controls have a user impact component. Controls with low user impact should have little to no effect on your users' day-to-day operations.
+Your score improves for configuring recommended security features or performing security-related tasks (like reading reports). Some actions are scored for partial completion, like enabling multi-factor authentication (MFA) for your users. Your secure score is directly representative of the Microsoft security services you use. Remember that security must be balanced with usability. All security controls have a user impact component. Controls with low user impact should have little to no effect on your users' day-to-day operations.
To see your score history, head over to the [Microsoft 365 security center](https://security.microsoft.com/) and review your overall Microsoft secure score. You can review changes to your overall secure score by clicking on View History. Choose a specific date to see which controls were enabled for that day and what points you earned for each one.
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
Please follow this link to read more about [auto upgrade](how-to-connect-install
- If the Cloned Custom Sync Rule does not flow some Mail and Exchange attributes, then the new Exchange Sync Rule will add those attributes. - Added support for [Selective Password hash Synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization) - Added the new [Single Object Sync cmdlet](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-single-object-sync). Use this cmdlet to troubleshoot your Azure AD Connect sync configuration.
+ - Azure AD Connect now supports the Hybrid Identity Administrator role for configuring the service.
- Updated AADConnectHealth agent to 3.1.83.0 - New version of the [ADSyncTools PowerShell module](https://docs.microsoft.com/azure/active-directory/hybrid/reference-connect-adsynctools), which has several new or improved cmdlets.
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-integrate-with-sharepoint-server-saml.md
In this step, you create an application in your Azure AD tenant that uses Applic
1. Create a new Azure AD Application Proxy application with custom domain. For step-by-step instructions, see [Custom domains in Azure AD Application Proxy](./application-proxy-configure-custom-domain.md).
- - Internal URL: https://portal.contoso.com/
- - External URL: https://portal.contoso.com/
+ - Internal URL: 'https://portal.contoso.com/'
+ - External URL: 'https://portal.contoso.com/'
- Pre-Authentication: Azure Active Directory - Translate URLs in Headers: No - Translate URLs in Application Body: No
In this step, you create an application in your Azure AD tenant that uses Applic
## Step 3: Test your application
-Using a browser from a computer on an external network, navigate to the URL (https://portal.contoso.com/) that you configured during the publish step. Make sure you can sign in with the test account that you set up.
+Using a browser from a computer on an external network, navigate to the link that you configured during the publish step. Make sure you can sign in with the test account that you set up.
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
After migration, you may choose to send communication informing the users of the
During the process of the migration, your app may already have a test environment used during regular deployments. You can continue to use this environment for migration testing. If a test environment is not currently available, you may be able to set one up using Azure App Service or Azure Virtual Machines, depending on the architecture of the application. You may choose to set up a separate test Azure AD tenant to use as you develop your app configurations. This tenant will start in a clean state and will not be configured to sync with any system.
-You can test each app by logging in with a test user and make sure all functionality is the same as prior to the migration. If you determine during testing that users will need to update their [MFA](/active-directory/authentication/howto-mfa-userstates) or [SSPR](../authentication/tutorial-enable-sspr.md)settings, or you are adding this functionality during the migration, be sure to add that to your end-user communication plan. See [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
+You can test each app by logging in with a test user and making sure all functionality is the same as prior to the migration. If you determine during testing that users will need to update their [MFA](/azure/active-directory/authentication/howto-mfa-userstates) or [SSPR](../authentication/tutorial-enable-sspr.md) settings, or you are adding this functionality during the migration, be sure to add that to your end-user communication plan. See [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
Once you have migrated the apps, go to the [Azure portal](https://aad.portal.azure.com/) to test if the migration was a success. Follow the instructions below:
You can guide your users on how to discover their apps:
Users can download an **Intune-managed browser**: -- **For Android devices**, from the [Google play store](https://play.google.com/store/apps/details?id=com.microsoft.intune.mam.managedbrowser)
+- **For Android devices**, from the [Google play store](https://play.google.com/store/apps/details?id=com.microsoft.intune)
- **For Apple devices**, from the [Apple App Store](https://itunes.apple.com/us/app/microsoft-intune-managed-browser/id943264951?mt=8) or they can download the [My Apps mobile app for iOS ](https://apps.apple.com/us/app/my-apps-azure-active-directory/id824048653)
active-directory Troubleshoot Adding Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/troubleshoot-adding-apps.md
- Title: Troubleshoot common problems when adding or removing an application to Azure Active Directory
-description: Troubleshoot the common problems people face when adding or removing an app to Azure Active Directory.
------- Previously updated : 09/11/2018---
-# Troubleshoot common problems when adding or removing an application to Azure Active Directory
-This article helps you understand the common problems people face when adding or removing an app to Azure Active Directory.
-
-## I clicked the "add" button and my application took a long time to appear
-Under some circumstances, it can take 1-2 minutes (and sometimes longer) for an application to appear after adding it to your directory. While this is not the normal expected performance, you can see the application addition is in progress by clicking on the **Notifications** icon (the bell) in the upper right of the [Azure portal](https://portal.azure.com/) and looking for an **In Progress** or **Completed** notification labeled **Adding application.**
-
-If your application is never added, or you encounter an error when clicking the **Add** button, you'll see a **Notification** in an **Error** state. If you want more details about the error, either to learn more or to share with a support engineer, follow the steps in the [How to see the details of a portal notification](#how-to-see-the-details-of-a-portal-notification) section.
-
-## I clicked the "add" button and my application didn't appear
-Sometimes, due to transient issues, networking problems, or a bug, adding an application fails. You can tell this happens when you click the **Notifications** icon (the bell) in the upper right of the Azure portal and you see a red (!) icon next to your **Adding application** notification. This indicates there was an error when creating the application.
-
-If you encounter an error when clicking the **Add** button, you'll see a **Notification** in an **Error** state. If you want more details about the error, either to learn more or to share with a support engineer, follow the steps in the [How to see the details of a portal notification](#how-to-see-the-details-of-a-portal-notification) section.
-
-## I don't know how to set up my application once I've added it
-If you need help with learning about applications, the [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](../saas-apps/tutorial-list.md) article is a good place to start.
-
-In addition to this, the [Azure AD Applications Document Library](./what-is-application-management.md) helps you to learn more about single sign-on with Azure AD and how it works.
-
-## I want to delete an application but the delete button is disabled
-
-The delete button will be disabled in the following scenarios:
-- For applications under Enterprise applications, if you don't have one of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.--- For Microsoft applications, you won't be able to delete them from the UI regardless of your role.--- For servicePrincipals that correspond to a managed identity. Managed identities service principals can't be deleted in the Enterprise apps blade. You need to go to the Azure resource to manage it. Learn more about [Managed Identity](../managed-identities-azure-resources/overview.md)-
-## How to see the details of a portal notification
-You can see the details of any portal notification by following the steps below:
-1. Select the **Notifications** icon (the bell) in the upper right of the Azure portal
-2. Select any notification in an **Error** state (those with a red (!) next to them).
- >[!NOTE]
- >You cannot click notifications in a **Successful** or **In Progress** state.
-3. Use the information under **Notification Details** to understand more details about the problem.
-4. If you still need help, you can also share this information with a support engineer or the product group to get help with your problem.
-5. Select the **copy icon** to the right of the **Copy error** textbox to copy all the notification details to share with a support or product group engineer.
-
-## How to get help by sending notification details to a support engineer
-It is important that you share **all the details listed below** with a support engineer if you need help, so that they can help you quickly. **Take a screenshot** or select the **Copy error icon**, found to the right of the **Copy error** textbox.
-
-## Notification Details Explained
-See the following descriptions for more details about the notifications.
-
-### Essential Notification Items
-- **Title** – the descriptive title of the notification
- * Example – **Application proxy settings**
-- **Description** – the description of what occurred as a result of the operation
- - Example – **Internal url entered is already being used by another application**
-- **Notification ID** – the unique ID of the notification
- - Example – **clientNotification-2adbfc06-2073-4678-a69f-7eb78d96b068**
-- **Client Request ID** – the specific request ID made by your browser
- - Example – **302fd775-3329-4670-a9f3-bea37004f0bc**
-- **Time Stamp UTC** – the timestamp during which the notification occurred, in UTC
- - Example – **2017-03-23T19:50:43.7583681Z**
-- **Internal Transaction ID** – the internal ID we can use to look up the error in our systems
- - Example – **71a2f329-ca29-402f-aa72-bc00a7aca603**
-- **UPN** – the user who performed the operation
- - Example – **tperkins\@f128.info**
-- **Tenant ID** – the unique ID of the tenant that the user who performed the operation was a member of
- - Example – **7918d4b5-0442-4a97-be2d-36f9f9962ece**
-- **User object ID** – the unique ID of the user who performed the operation
- - Example – **17f84be4-51f8-483a-b533-383791227a99**
-
-### Detailed Notification Items
-- **Display Name** – **(can be empty)** a more detailed display name for the error
- - Example – **Application proxy settings**
-- **Status** – the specific status of the notification
- - Example – **Failed**
-- **Object ID** – **(can be empty)** the object ID against which the operation was performed
- - Example – **8e08161d-f2fd-40ad-a34a-a9632d6bb599**
-- **Details** – the detailed description of what occurred as a result of the operation
- - Example – **Internal url `https://bing.com/` is invalid since it is already in use**
-- **Copy error** – Select the **copy icon** to the right of the **Copy error** textbox to copy all the notification details to share with a support or product group engineer
- - Example
 - ```{"errorCode":"InternalUrl_Duplicate","localizedErrorDetails":{"errorDetail":"Internal url 'https://google.com/' is invalid since it is already in use"},"operationResults":[{"objectId":null,"displayName":null,"status":0,"details":"Internal url 'https://bing.com/' is invalid since it is already in use"}],"timeStampUtc":"2017-03-23T19:50:26.465743Z","clientRequestId":"302fd775-3329-4670-a9f3-bea37004f0bb","internalTransactionId":"ea5b5475-03b9-4f08-8e95-bbb11289ab65","upn":"tperkins@f128.info","tenantId":"7918d4b5-0442-4a97-be2d-36f9f9962ece","userObjectId":"17f84be4-51f8-483a-b533-383791227a99"}```
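Since the **Copy error** payload is JSON, the fields a support engineer asks for first can be pulled out programmatically. A small sketch using the sample payload above (with the markdown escapes removed):

```python
import json

# The sample Copy error payload from the notification details above.
payload = '''{"errorCode":"InternalUrl_Duplicate","localizedErrorDetails":{"errorDetail":"Internal url 'https://google.com/' is invalid since it is already in use"},"operationResults":[{"objectId":null,"displayName":null,"status":0,"details":"Internal url 'https://bing.com/' is invalid since it is already in use"}],"timeStampUtc":"2017-03-23T19:50:26.465743Z","clientRequestId":"302fd775-3329-4670-a9f3-bea37004f0bb","internalTransactionId":"ea5b5475-03b9-4f08-8e95-bbb11289ab65","upn":"tperkins@f128.info","tenantId":"7918d4b5-0442-4a97-be2d-36f9f9962ece","userObjectId":"17f84be4-51f8-483a-b533-383791227a99"}'''

details = json.loads(payload)

# The identifiers support typically needs to look up the failure.
print(details["errorCode"])              # InternalUrl_Duplicate
print(details["clientRequestId"])        # 302fd775-3329-4670-a9f3-bea37004f0bb
print(details["internalTransactionId"])  # ea5b5475-03b9-4f08-8e95-bbb11289ab65
```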
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
As a best practice, we recommend that you assign this role to fewer than five pe
> | [Groups Administrator](#groups-administrator) | Members of this role can create/manage groups, create/manage groups settings like naming and expiration policies, and view groups activity and audit reports. | fdd7a751-b60b-444a-984c-02652fe8fa1c | > | [Guest Inviter](#guest-inviter) | Can invite guest users independent of the 'members can invite guests' setting. | 95e79109-95c0-4d8e-aee3-d01accf2d47b | > | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators. | 729827e3-9c14-49f7-bb1b-9608f156bbb8 |
-> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
+> | [Hybrid Identity Administrator](#hybrid-identity-administrator) | Can manage AD to Azure AD cloud provisioning, Azure AD Connect and federation settings. | 8ac3fc64-6eca-42ea-9e69-59f4c7b60eb2 |
> | [Insights Administrator](#insights-administrator) | Has administrative access in the Microsoft 365 Insights app. | eb1f4a8d-243a-41f0-9fbd-c7cdf6c5ef7c | > | [Insights Business Leader](#insights-business-leader) | Can view and share dashboards and insights via the M365 Insights app. | 31e939ad-9672-4796-9c2e-873181342d2d | > | [Intune Administrator](#intune-administrator) | Can manage all aspects of the Intune product. | 3a2c62db-5318-420d-8d74-23affee5d9d5 |
This role was previously called "Password Administrator" in the [Azure portal](h
## Hybrid Identity Administrator
-Users in this role can create, manage and deploy provisioning configuration setup from AD to Azure AD using Cloud Provisioning as well as manage federation settings. Users can also troubleshoot and monitor logs using this role.
+Users in this role can create, manage, and deploy provisioning configuration from AD to Azure AD using Cloud Provisioning, as well as manage Azure AD Connect and federation settings. Users can also troubleshoot and monitor logs using this role.
> [!div class="mx-tableFixed"] > | Actions | Description |
Usage Summary Reports Reader |   | :heavy_check_mark: | :heavy_check_mark:
- [Assign Azure AD roles to groups](groups-assign-role.md) - [Understand the different roles](../../role-based-access-control/rbac-and-directory-admin-roles.md)-- [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)
+- [Assign a user as an administrator of an Azure subscription](../../role-based-access-control/role-assignments-portal-subscription-admin.md)
active-directory Cornerstone Ondemand Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cornerstone-ondemand-tutorial.md
Title: 'Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone OnDemand | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Cornerstone OnDemand.
+ Title: 'Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone Single Sign-On | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Cornerstone Single Sign-On.
Previously updated : 12/24/2020 Last updated : 03/09/2021
-# Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone OnDemand
+# Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone Single Sign-On
-In this tutorial, you'll learn how to integrate Cornerstone OnDemand with Azure Active Directory (Azure AD). When you integrate Cornerstone OnDemand with Azure AD, you can:
+In this tutorial, you'll learn how to integrate Cornerstone Single Sign-On with Azure Active Directory (Azure AD). When you integrate Cornerstone Single Sign-On with Azure AD, you can:
-* Control in Azure AD who has access to Cornerstone OnDemand.
-* Enable your users to be automatically signed-in to Cornerstone OnDemand with their Azure AD accounts.
+* Control in Azure AD who has access to Cornerstone Single Sign-On.
+* Enable your users to be automatically signed-in to Cornerstone Single Sign-On with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Cornerstone OnDemand with Azure
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Cornerstone OnDemand single sign-on (SSO) enabled subscription.
+* A Cornerstone Single Sign-On subscription with single sign-on (SSO) enabled.
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Cornerstone OnDemand supports **SP** initiated SSO
-* Cornerstone OnDemand supports [Automated user provisioning](cornerstone-ondemand-provisioning-tutorial.md)
+* Cornerstone Single Sign-On supports **SP** initiated SSO.
+* Cornerstone Single Sign-On supports [Automated user provisioning](cornerstone-ondemand-provisioning-tutorial.md).
-## Adding Cornerstone OnDemand from the gallery
+## Adding Cornerstone Single Sign-On from the gallery
-To configure the integration of Cornerstone OnDemand into Azure AD, you need to add Cornerstone OnDemand from the gallery to your list of managed SaaS apps.
+To configure the integration of Cornerstone Single Sign-On into Azure AD, you need to add Cornerstone Single Sign-On from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Cornerstone OnDemand** in the search box.
-1. Select **Cornerstone OnDemand** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **Cornerstone Single Sign-On** in the search box.
+1. Select **Cornerstone Single Sign-On** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Cornerstone OnDemand
+## Configure and test Azure AD SSO for Cornerstone Single Sign-On
-Configure and test Azure AD SSO with Cornerstone OnDemand using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cornerstone OnDemand.
+Configure and test Azure AD SSO with Cornerstone Single Sign-On using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cornerstone Single Sign-On.
-To configure and test Azure AD SSO with Cornerstone OnDemand, perform the following steps:
+To configure and test Azure AD SSO with Cornerstone Single Sign-On, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-2. **[Configure Cornerstone OnDemand SSO](#configure-cornerstone-ondemand-sso)** - to configure the Single Sign-On settings on application side.
- 1. **[Create Cornerstone OnDemand test user](#create-cornerstone-ondemand-test-user)** - to have a counterpart of B.Simon in Cornerstone OnDemand that is linked to the Azure AD representation of user.
+2. **[Configure Cornerstone Single Sign-On SSO](#configure-cornerstone-single-sign-on-sso)** - to configure the Single Sign-On settings on application side.
+ 1. **[Create Cornerstone Single Sign-On test user](#create-cornerstone-single-sign-on-test-user)** - to have a counterpart of B.Simon in Cornerstone Single Sign-On that is linked to the Azure AD representation of user.
3. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Cornerstone OnDemand** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Cornerstone Single Sign-On** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**. 1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<company>.csod.com/samldefault.aspx?ouid=2`
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<PORTAL_NAME>.csod.com`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<company>.csod.com/samldefault.aspx?ouid=2`
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<PORTAL_NAME>.csod.com/samldefault.aspx?ouid=<OUID>`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<PORTAL_NAME>.csod.com/samldefault.aspx?ouid=<OUID>`
> [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Cornerstone OnDemand Client support team](mailto:moreinfo@csod.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL, Identifier and Sign on URL. Contact [Cornerstone Single Sign-On Client support team](mailto:moreinfo@csod.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
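As a quick sanity check, the placeholder patterns above can be expanded programmatically. A minimal sketch in Python, assuming a hypothetical portal name `contoso` and OUID `2` (the real values come from the Cornerstone support team):

```python
# Expand the Basic SAML Configuration placeholder patterns.
# PORTAL_NAME and OUID are hypothetical examples, not real values.
PORTAL_NAME = "contoso"
OUID = "2"

identifier = f"https://{PORTAL_NAME}.csod.com"
reply_url = f"https://{PORTAL_NAME}.csod.com/samldefault.aspx?ouid={OUID}"
sign_on_url = reply_url  # the Sign on URL uses the same pattern as the Reply URL

print(identifier)   # https://contoso.csod.com
print(reply_url)    # https://contoso.csod.com/samldefault.aspx?ouid=2
```

Substituting your own portal name and OUID into these patterns yields the three values to paste into the Basic SAML Configuration section.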
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-6. On the **Set up Cornerstone OnDemand** section, copy the appropriate URL(s) based on your requirement.
+6. On the **Set up Cornerstone Single Sign-On** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cornerstone OnDemand.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cornerstone Single Sign-On.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Cornerstone OnDemand**.
+1. In the applications list, select **Cornerstone Single Sign-On**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Cornerstone OnDemand SSO
+## Configure Cornerstone Single Sign-On SSO
+
+1. Sign in to Cornerstone Single Sign-On as an administrator.
+
+1. Go to **Admin -> Tools**.
+
+ ![Screenshot of the Admin page.](./media/cornerstone-ondemand-tutorial/admin.png)
+
+1. Select the **EDGE** panel in **Configuration Tools**.
+
+ ![Screenshot of the EDGE panel.](./media/cornerstone-ondemand-tutorial/edge-panel.png)
+
+1. Select **Single Sign-On** in the **Integrate** section.
+
+ ![Screenshot of the Single Sign-On option.](./media/cornerstone-ondemand-tutorial/single-sign-on.png)
+
+1. Click the **Add SSO** button. In the pop-up window that appears, select **Inbound SAML** and then click **Add**.
-To configure single sign-on on **Cornerstone OnDemand** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Cornerstone OnDemand support team](mailto:moreinfo@csod.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot of the Inbound SAML option.](./media/cornerstone-ondemand-tutorial/inbound.png)
-### Create Cornerstone OnDemand test user
+1. Perform the following steps on the resulting page:
-The objective of this section is to create a user called B.Simon in Cornerstone OnDemand. Cornerstone OnDemand supports automatic user provisioning, which is by default enabled. You can find more details [here](./cornerstone-ondemand-provisioning-tutorial.md) on how to configure automatic user provisioning.
+ ![Screenshot of the Configuration section for Cornerstone.](./media/cornerstone-ondemand-tutorial/configuration.png)
+
+ a. Under **General Properties**, click **Upload File** to upload the **Certificate (Base64)** file that you downloaded from the Azure portal.
+
+ b. Select the **Enable** checkbox, and in the **IDP URL** textbox paste the **Login URL** value that you copied from the Azure portal.
+
+ c. Click **Save**.
+
+### Create Cornerstone Single Sign-On test user
+
+The objective of this section is to create a user called B.Simon in Cornerstone Single Sign-On. Cornerstone Single Sign-On supports automatic user provisioning, which is enabled by default. You can find more details [here](./cornerstone-ondemand-provisioning-tutorial.md) on how to configure automatic user provisioning.
**If you need to create user manually, perform following steps:**
-To configure user provisioning, send the information (e.g.: Name, Email) about the Azure AD user you want to provision to the [Cornerstone OnDemand support team](mailto:moreinfo@csod.com).
+1. Sign in to Cornerstone Single Sign-On as an administrator.
+
+1. Go to **Admin -> Users** and click **Add User** at the bottom of the page.
+
+ ![Screenshot of test user creation in Cornerstone.](./media/cornerstone-ondemand-tutorial/user-1.png)
+
+1. Fill in the required fields on the **Add new user** page and click **Save**.
->[!NOTE]
->You can use any other Cornerstone OnDemand user account creation tools or APIs provided by Cornerstone OnDemand to provision Azure AD user accounts.
+ ![Screenshot of test user creation with the required fields.](./media/cornerstone-ondemand-tutorial/user-2.png)
## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Cornerstone OnDemand Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Cornerstone Single Sign-On Sign-on URL where you can initiate the login flow.
-* Go to Cornerstone OnDemand Sign-on URL directly and initiate the login flow from there.
+* Go to Cornerstone Single Sign-On Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Cornerstone OnDemand tile in the My Apps, this will redirect to Cornerstone OnDemand Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Cornerstone Single Sign-On tile in the My Apps, this will redirect to Cornerstone Single Sign-On Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Cornerstone OnDemand you can enforce Session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+Once you configure Cornerstone Single Sign-On you can enforce Session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
active-directory Grammarly Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/grammarly-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot that shows Provisioning Mode set to Automatic.](common/provisioning-automatic.png)
-1. In the **Admin Credentials** section, enter your Grammarly **Tenant URL** and **Secret token** information. Select **Test Connection** to ensure that Azure AD can connect to Grammarly. If the connection fails, ensure that your Grammarly account has admin permissions and try again.
+1. Under the **Admin Credentials** section, in the **Tenant URL** field enter `https://sso.grammarly.com/scim/v2`, and in the **Secret Token** field enter the token provided by Grammarly (see Step 2 above). Click **Test Connection** to ensure Azure AD can connect to Grammarly. If the connection fails, ensure your Grammarly account has Admin permissions and try again.
![Screenshot that shows the Tenant URL and Secret Token boxes.](common/provisioning-testconnection-tenanturltoken.png)
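The Test Connection step works by issuing an authenticated SCIM request against that base URL. A rough illustration of what such a probe looks like, built with Python's standard library (the token string is a placeholder, and this sketch only constructs the request rather than sending it):

```python
import urllib.request

SCIM_BASE = "https://sso.grammarly.com/scim/v2"   # Tenant URL from the step above
SECRET_TOKEN = "<your-secret-token>"              # placeholder: token provided by Grammarly

# A typical SCIM connectivity check requests a single page of the /Users resource
# with bearer-token authentication.
req = urllib.request.Request(
    f"{SCIM_BASE}/Users?startIndex=1&count=1",
    headers={
        "Authorization": f"Bearer {SECRET_TOKEN}",
        "Accept": "application/scim+json",
    },
)

print(req.full_url)
```

A `401` response to such a request usually indicates a bad token, while a `403` points at missing Admin permissions on the Grammarly account.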
active-directory Kemp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/kemp-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on Test this application in Azure portal and you should be automatically signed in to the Kemp LoadMaster Azure AD integration for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the Kemp LoadMaster Azure AD integration tile in the My Apps, you should be automatically signed in to the Kemp LoadMaster Azure AD integration for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the Kemp LoadMaster Azure AD integration tile in the My Apps, you should be automatically signed in to the Kemp LoadMaster Azure AD integration for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Kemp LoadMaster Azure AD integration you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure Kemp LoadMaster Azure AD integration you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Launchdarkly Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/launchdarkly-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on Test this application in Azure portal and you should be automatically signed in to the LaunchDarkly for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the LaunchDarkly tile in the My Apps, you should be automatically signed in to the LaunchDarkly for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the LaunchDarkly tile in the My Apps, you should be automatically signed in to the LaunchDarkly for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure LaunchDarkly you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure LaunchDarkly you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Learning At Work Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/learning-at-work-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to Learning at Work Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Learning at Work tile in the My Apps, this will redirect to Learning at Work Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the Learning at Work tile in the My Apps, this will redirect to Learning at Work Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Learning at Work you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure Learning at Work you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Lexonis Talentscape Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/lexonis-talentscape-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| roles | user.assignedroles | > [!NOTE]
- > Lexonis TalentScape expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps#app-roles-ui).
+ > Lexonis TalentScape expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/howto-add-app-roles-in-azure-ad-apps.md).
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Lexonis TalentScape for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the Lexonis TalentScape tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Lexonis TalentScape for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Lexonis TalentScape tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Lexonis TalentScape for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Lexonis TalentScape you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure Lexonis TalentScape you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Logicgate Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logicgate-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure LogicGate for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to LogicGate.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: eea988ef-b0f1-4d22-b867-310f167540c3
+++
+ na
+ms.devlang: na
+ Last updated : 03/17/2021+++
+# Tutorial: Configure LogicGate for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both LogicGate and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LogicGate](https://www.logicgate.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in LogicGate
+> * Remove users in LogicGate when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and LogicGate
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A LogicGate tenant with the Enterprise plan or better enabled.
+* A user account in LogicGate with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and LogicGate](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure LogicGate to support provisioning with Azure AD
+
+1. Log in to the **LogicGate** admin console. Navigate to the **Home** tab and click the **Profile** icon in the top-right corner.
+2. Navigate to **Profile** **>** **Access Key**.
+
+ ![Profile tab](./media/logicgate-provisioning-tutorial/profile.png)
+
+3. Click **Generate Access key**.
+
+ ![Access tab](./media/logicgate-provisioning-tutorial/key.png)
+
+4. Copy and save the **Access Key**. This value will be entered in the **Secret Token** field in the Provisioning tab of your LogicGate application in the Azure portal.
+
+ ![Key tab](./media/logicgate-provisioning-tutorial/access.png)
+
+## Step 3. Add LogicGate from the Azure AD application gallery
+
+Add LogicGate from the Azure AD application gallery to start managing provisioning to LogicGate. If you have previously set up LogicGate for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to LogicGate, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to LogicGate
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in LogicGate based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for LogicGate in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **LogicGate**.
+
+ ![The LogicGate link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provision tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Automatic tab](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, enter your LogicGate **Tenant URL** and the **Secret Token** that you saved in Step 2. Click **Test Connection** to ensure Azure AD can connect to LogicGate. If the connection fails, ensure your LogicGate account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to LogicGate**.
+
+9. Review the user attributes that are synchronized from Azure AD to LogicGate in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LogicGate for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the LogicGate API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |userName|String|&check;|
+ |emails[type eq "work"].value|String||
+ |active|Boolean||
+ |name.givenName|String||
+ |name.familyName|String||
+
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for LogicGate, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to LogicGate by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
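The matching behavior described in step 9 can be sketched as follows — a non-authoritative illustration, assuming the LogicGate SCIM endpoint follows the standard `/Users?filter=` convention; the tenant URL and user are placeholders, not documented LogicGate values:

```python
from urllib.parse import quote

# Non-authoritative sketch: Azure AD provisioning matches existing users by
# querying the app's SCIM endpoint with a filter on the Matching attribute
# (userName in the table above). The tenant URL here is a placeholder.
def scim_match_url(tenant_url: str, attribute: str, value: str) -> str:
    """Build the SCIM /Users lookup that a matching operation would issue."""
    scim_filter = f'{attribute} eq "{value}"'
    return f"{tenant_url}/Users?filter={quote(scim_filter)}"

print(scim_match_url("https://example.invalid/scim/v2", "userName", "alice@contoso.com"))
```

This is why changing the matching target attribute requires checking that the target API supports filtering on it: the lookup above only works if the endpoint can evaluate that filter.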
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
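Beyond the portal, the provisioning logs listed above can also be pulled programmatically. A minimal sketch, assuming access to the Microsoft Graph provisioning-logs endpoint (in beta at the time of writing); token acquisition and the HTTP request itself are omitted:

```python
# Sketch under assumptions: provisioning log entries are also exposed through
# Microsoft Graph (auditLogs/provisioning, in beta at the time of writing).
# Acquiring a token and sending the GET request are intentionally left out.
GRAPH_BETA = "https://graph.microsoft.com/beta"

def provisioning_logs_url(top: int = 20) -> str:
    """URL returning the most recent provisioning log entries ($top is standard OData paging)."""
    return f"{GRAPH_BETA}/auditLogs/provisioning?$top={top}"

print(provisioning_logs_url())
```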
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Onshape Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/onshape-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Onshape for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the Onshape tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Onshape for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Onshape tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Onshape for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Onshape you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure Onshape you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Printerlogic Saas Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/printerlogic-saas-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the PrinterLogic for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the PrinterLogic tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the PrinterLogic for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the PrinterLogic tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the PrinterLogic for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure PrinterLogic you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure PrinterLogic you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Ringcentral Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ringcentral-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on Test this application in Azure portal and you should be automatically signed in to the RingCentral for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the RingCentral tile in the My Apps, you should be automatically signed in to the RingCentral for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the RingCentral tile in the My Apps, you should be automatically signed in to the RingCentral for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure RingCentral you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure RingCentral you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Samanage Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/samanage-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to SolarWinds Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the SolarWinds tile in the My Apps, this will redirect to SolarWinds Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the SolarWinds tile in the My Apps, this will redirect to SolarWinds Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure SolarWinds you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure SolarWinds you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Samlssojira Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/samlssojira-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the SAML SSO for Jira by resolution GmbH for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the SAML SSO for Jira by resolution GmbH tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SAML SSO for Jira by resolution GmbH for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the SAML SSO for Jira by resolution GmbH tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SAML SSO for Jira by resolution GmbH for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Enable SSO redirection for Jira
After activating the option, you can still reach the username/password prompt if
## Next steps
-Once you configure SAML SSO for Jira by resolution GmbH you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure SAML SSO for Jira by resolution GmbH you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Samsung Knox And Business Services Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/samsung-knox-and-business-services-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to [SamsungKnox.com](https://samsungknox.com/) directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Samsung Knox and Business Services tile in the My Apps, this will redirect to [SamsungKnox.com](https://samsungknox.com/). For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the Samsung Knox and Business Services tile in the My Apps, this will redirect to [SamsungKnox.com](https://samsungknox.com/). For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Sapient Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sapient-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to Sapient Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Sapient tile in the My Apps, this will redirect to Sapient Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the Sapient tile in the My Apps, this will redirect to Sapient Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Sapient you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure Sapient you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Secure Login Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/secure-login-provisioning-tutorial.md
# Tutorial: Configure SecureLogin for automatic user provisioning
-This tutorial describes the steps you need to perform in both SecureLogin and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SecureLogin](https://securelogin.nu) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both SecureLogin and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SecureLogin](https://securelogin.nu) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported
This tutorial describes the steps you need to perform in both SecureLogin and Az
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
-* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A [SecureLogin](https://securelogin.nu) Account.

## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and SecureLogin](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and SecureLogin](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure SecureLogin to support provisioning with Azure AD
A [SecureLogin](https://securelogin.nu) account as a manager is required to **Au
## Step 3. Add SecureLogin from the Azure AD application gallery
-Add SecureLogin from the Azure AD application gallery to start managing provisioning to SecureLogin. If you have previously setup SecureLogin for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add SecureLogin from the Azure AD application gallery to start managing provisioning to SecureLogin. If you have previously setup SecureLogin for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to SecureLogin, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+* When assigning users and groups to SecureLogin, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
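Assignment-based scoping can also be scripted. As a hypothetical sketch (not the tutorial's documented procedure), Microsoft Graph's appRoleAssignments API takes a request body like the one built here; all three ids are placeholders you would look up first, and the appRoleId must name a role other than Default Access:

```python
import json

# Hypothetical sketch: POST /users/{user-id}/appRoleAssignments on Microsoft
# Graph assigns a user to the app. All three ids below are placeholders;
# appRoleId must be a non-default role or the user is excluded from
# provisioning, as noted above.
def app_role_assignment_body(principal_id: str, sp_object_id: str, app_role_id: str) -> str:
    return json.dumps({
        "principalId": principal_id,  # object id of the user or group being assigned
        "resourceId": sp_object_id,   # object id of the app's service principal
        "appRoleId": app_role_id,     # a non-default app role from the app manifest
    })

print(app_role_assignment_body("<user-oid>", "<sp-oid>", "<approle-id>"))
```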
## Step 5. Configure automatic user provisioning to SecureLogin
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to SecureLogin**.
-9. Review the user attributes that are synchronized from Azure AD to SecureLogin in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SecureLogin for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the SecureLogin API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to SecureLogin in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SecureLogin for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the SecureLogin API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
-|externalId|String|
-|members|Reference|
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
13. To enable the Azure AD provisioning service for SecureLogin, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment

Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)

## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Securitystudio Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/securitystudio-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on Test this application in Azure portal and you should be automatically signed in to the SecurityStudio for which you set up the SSO.
-* You can use Microsoft My Apps. When you click the SecurityStudio tile in the My Apps, you should be automatically signed in to the SecurityStudio for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the SecurityStudio tile in the My Apps, you should be automatically signed in to the SecurityStudio for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next Steps
-Once you configure SecurityStudio you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure SecurityStudio you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Sendpro Enterprise Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sendpro-enterprise-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to SendPro Enterprise Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the SendPro Enterprise tile in the My Apps, this will redirect to SendPro Enterprise Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the SendPro Enterprise tile in the My Apps, this will redirect to SendPro Enterprise Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure SendPro Enterprise you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure SendPro Enterprise you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Sharingcloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sharingcloud-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the SharingCloud for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the SharingCloud tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SharingCloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the SharingCloud tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the SharingCloud for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure SharingCloud you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
-
+Once you configure SharingCloud you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Smartlook Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/smartlook-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Smartlook for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the Smartlook tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Smartlook for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Smartlook tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Smartlook for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Smartlook you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure Smartlook you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Thrive Lxp Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/thrive-lxp-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to Thrive LXP Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Thrive LXP tile in the My Apps, this will redirect to Thrive LXP Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the Thrive LXP tile in the My Apps, this will redirect to Thrive LXP Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Thrive LXP you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure Thrive LXP you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Tradeshift Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/tradeshift-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the Tradeshift for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the Tradeshift tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Tradeshift for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the Tradeshift tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Tradeshift for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Tradeshift you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure Tradeshift you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Transperfect Globallink Dashboard Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/transperfect-globallink-dashboard-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the TransPerfect GlobalLink Dashboard for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the TransPerfect GlobalLink Dashboard tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the TransPerfect GlobalLink Dashboard for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the TransPerfect GlobalLink Dashboard tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the TransPerfect GlobalLink Dashboard for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure TransPerfect GlobalLink Dashboard you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure TransPerfect GlobalLink Dashboard you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Travelperk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/travelperk-provisioning-tutorial.md
# Tutorial: Configure TravelPerk for automatic user provisioning
-This tutorial describes the steps you need to perform in both TravelPerk and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [TravelPerk](https://www.travelperk.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
+This tutorial describes the steps you need to perform in both TravelPerk and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [TravelPerk](https://www.travelperk.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported
This tutorial describes the steps you need to perform in both TravelPerk and Azu
> - Create users in TravelPerk
> - Remove users in TravelPerk when they do not require access anymore
> - Keep user attributes synchronized between Azure AD and TravelPerk
-> - [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/travelperk-tutorial) to TravelPerk (recommended)
+> - [Single sign-on](./travelperk-tutorial.md) to TravelPerk (recommended)
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-- [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant).
-- A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+- [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+- A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
- An active [TravelPerk](https://app.travelperk.com/signup) admin account.
- A Premium/Pro [plan](https://www.travelperk.com/pricing/).
## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
-2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
-3. Determine what data to [map between Azure AD and TravelPerk](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and TravelPerk](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure TravelPerk to support provisioning with Azure AD
Approvers will not be created if they are not properly configured on TravelPerk.
## Step 3. Add TravelPerk from the Azure AD application gallery
-Add TravelPerk from the Azure AD application gallery to start managing provisioning to TravelPerk. If you have previously set up TravelPerk for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+Add TravelPerk from the Azure AD application gallery to start managing provisioning to TravelPerk. If you have previously set up TravelPerk for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-- When assigning users to TravelPerk, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+- When assigning users to TravelPerk, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
-- Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+- Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
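The attribute-based scoping described above can be pictured as a small predicate over user attributes. The following is a minimal illustrative sketch, not the provisioning service's real rule engine; the attribute names, operators, and helper function are assumptions:

```python
# Minimal sketch of how an attribute-based scoping filter decides whether a
# user is in scope for provisioning. The "EQUALS" / "NOT EQUALS" operators
# and attribute names here are illustrative placeholders.

def in_scope(user: dict, clauses: list) -> bool:
    """A user is in scope only when every clause matches (logical AND)."""
    for attribute, operator, value in clauses:
        actual = user.get(attribute)
        if operator == "EQUALS" and actual != value:
            return False
        if operator == "NOT EQUALS" and actual == value:
            return False
    return True

# Example: only provision Sales users who are not guests.
clauses = [("department", "EQUALS", "Sales"), ("userType", "NOT EQUALS", "Guest")]

print(in_scope({"department": "Sales", "userType": "Member"}, clauses))  # True
print(in_scope({"department": "HR", "userType": "Member"}, clauses))     # False
```

Starting with one or two assigned test users, as suggested above, lets you confirm the filter behaves as intended before widening the scope.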
## Step 5. Configure automatic user provisioning to TravelPerk
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to TravelPerk**.
-9. Review the user attributes that are synchronized from Azure AD to TravelPerk in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in TravelPerk for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the TravelPerk API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to TravelPerk in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in TravelPerk for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the TravelPerk API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
| Attribute | Type | Supported For Filtering |
| --- | --- | --- |
This section guides you through the steps to configure the Azure AD provisioning
| urn:ietf:params:scim:schemas:extension:travelperk:2.0:User:emergencyContact.phone | String | |
| urn:ietf:params:scim:schemas:extension:travelperk:2.0:User:travelPolicy | String | |
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
11. To enable the Azure AD provisioning service for TravelPerk, change the **Provisioning Status** to **On** in the **Settings** section.
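As an illustration of the attribute mapping reviewed in step 9, a provisioned user can be thought of as a SCIM payload like the sketch below. Only the `urn:ietf:params:scim:schemas:extension:travelperk:2.0:User` attribute names come from the table above; the helper function, field values, and overall shape are illustrative assumptions, not the exact request the service sends:

```python
import json

# Hypothetical sketch of a SCIM user payload carrying the TravelPerk
# extension attributes from the mapping table. Values are placeholders.

TRAVELPERK_EXT = "urn:ietf:params:scim:schemas:extension:travelperk:2.0:User"

def build_scim_user(user_name: str, phone: str, policy: str) -> dict:
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", TRAVELPERK_EXT],
        # userName is typically the matching attribute used to locate the
        # existing TravelPerk account on update operations.
        "userName": user_name,
        TRAVELPERK_EXT: {
            "emergencyContact": {"phone": phone},
            "travelPolicy": policy,
        },
    }

payload = build_scim_user("alice@contoso.com", "+1 555 0100", "Default")
print(json.dumps(payload, indent=2))
```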
This operation starts the initial synchronization cycle of all users and groups
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Additional resources
-- [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+- [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
-- [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
+- [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Truechoice Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/truechoice-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to TrueChoice Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the TrueChoice tile in the My Apps, this will redirect to TrueChoice Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the TrueChoice tile in the My Apps, this will redirect to TrueChoice Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure TrueChoice you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure TrueChoice you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Uniflow Online Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/uniflow-online-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to uniFLOW Online Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the uniFLOW Online tile in the My Apps, this will redirect to uniFLOW Online Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the uniFLOW Online tile in the My Apps, this will redirect to uniFLOW Online Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure uniFLOW Online you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure uniFLOW Online you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Validsign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/validsign-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Click on **Test this application** in Azure portal and you should be automatically signed in to the ValidSign for which you set up the SSO
-You can also use Microsoft My Apps to test the application in any mode. When you click the ValidSign tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the ValidSign for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+You can also use Microsoft My Apps to test the application in any mode. When you click the ValidSign tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the ValidSign for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure ValidSign you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
--
+Once you configure ValidSign you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Vmware Horizon Unified Access Gateway Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/vmware-horizon-unified-access-gateway-tutorial.md
You can also use Microsoft Access Panel to test the application in any mode. Whe
## Next Steps
-Once you configure VMware Horizon - Unified Access Gateway you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure VMware Horizon - Unified Access Gateway you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Wandera Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/wandera-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://radar.wandera.com/saml/acs/<tenant id>`
> [!NOTE]
- > The value is not real. Update the value with the actual Reply URL. Contact [Wandera RADAR Admin Client support team](https://www.wandera.com/about-wandera/contact/#supportsection) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Reply URL. Contact [Wandera RADAR Admin Client support team](https://www.wandera.com/about-wandera/contact/#supportsection) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. Carefully replace the `<tenant id>` part of the above URL with the Tenant ID shown in the **Settings** > **Administration** > **Single Sign-On** page within your Wandera account.
+ 1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
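The Reply URL substitution described in the note above amounts to filling the tenant ID into a fixed pattern. A small sketch (the helper name and sample tenant ID are placeholders, not real values):

```python
# Sketch of filling in the Wandera Reply URL pattern with your own tenant ID,
# which comes from Settings > Administration > Single Sign-On in Wandera.

def build_reply_url(tenant_id: str) -> str:
    if not tenant_id:
        raise ValueError("tenant_id is required; copy it from your Wandera account")
    return "https://radar.wandera.com/saml/acs/{0}".format(tenant_id)

print(build_reply_url("0123456789abcdef"))
# https://radar.wandera.com/saml/acs/0123456789abcdef
```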
When you click the Wandera RADAR Admin tile in the Access Panel, you should be a
- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
active-directory Zscalerprivateaccess Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/zscalerprivateaccess-tutorial.md
In this section, you test your Azure AD single sign-on configuration with follow
* Go to Zscaler Private Access (ZPA) Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Zscaler Private Access (ZPA) tile in the My Apps, this will redirect to Zscaler Private Access (ZPA) Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+* You can use Microsoft My Apps. When you click the Zscaler Private Access (ZPA) tile in the My Apps, this will redirect to Zscaler Private Access (ZPA) Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Zscaler Private Access (ZPA) you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
+Once you configure Zscaler Private Access (ZPA) you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-identity.md
The following additional permissions are needed by the cluster identity when cre
| Microsoft.Network/virtualNetworks/subnets/read <br/> Microsoft.Network/virtualNetworks/subnets/join/action | Required if using a subnet in another resource group such as a custom VNET. |
| Microsoft.Network/routeTables/routes/read <br/> Microsoft.Network/routeTables/routes/write | Required if using a subnet associated with a route table in another resource group such as a custom VNET with a custom route table. Required to verify if a subnet already exists for the subnet in the other resource group. |
| Microsoft.Network/virtualNetworks/subnets/read | Required if using an internal load balancer in another resource group. Required to verify if a subnet already exists for the internal load balancer in the resource group. |
+| Microsoft.Network/privatednszones/* | Required if using a private DNS zone in another resource group such as a custom privateDNSZone. |
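The permissions listed above could, for example, be collected into a custom role definition for the cluster identity. The following is a hypothetical sketch only; the role name, description, and subscription scope are placeholders, and the exact set of actions you need depends on which scenarios from the table apply to you:

```python
import json

# Illustrative custom role definition granting the cluster identity the
# extra network permissions from the table above. All values are examples.

custom_role = {
    "Name": "AKS Cluster Identity Network Access (example)",
    "Description": "Extra permissions for custom VNET, route table, and private DNS zone scenarios.",
    "Actions": [
        "Microsoft.Network/virtualNetworks/subnets/read",
        "Microsoft.Network/virtualNetworks/subnets/join/action",
        "Microsoft.Network/routeTables/routes/read",
        "Microsoft.Network/routeTables/routes/write",
        "Microsoft.Network/privatednszones/*",
    ],
    # Placeholder scope; substitute your own subscription ID.
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}

print(json.dumps(custom_role, indent=2))
```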
## Kubernetes role-based access control (Kubernetes RBAC)
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/rdp.md
This article shows you how to create an RDP connection with an AKS node using th
## Before you begin
This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure CLI][aks-windows-cli]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. If you don't know them, you can reset them by following [Reset Remote Desktop Services or its administrator password in a Windows VM
-](../virtual-machines/troubleshooting/reset-rdp.md). You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
+](/troubleshoot/azure/virtual-machines/reset-rdp). You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
You also need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
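The 2.0.61 minimum above has to be compared numerically, component by component, because a plain string comparison of dotted versions can order releases incorrectly. A small sketch (the helper names are illustrative; in practice you would just run `az --version`):

```python
# Parse dotted version strings (as reported by `az --version`) and compare
# them numerically against the 2.0.61 minimum required by this article.

def parse_version(text: str) -> tuple:
    return tuple(int(part) for part in text.split("."))

def meets_minimum(installed: str, minimum: str = "2.0.61") -> bool:
    return parse_version(installed) >= parse_version(minimum)

print(meets_minimum("2.19.1"))  # True
print(meets_minimum("2.0.50"))  # False, upgrade the Azure CLI first
```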
aks Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-baseline.md
Network Watcher is enabled automatically in your virtual network's region when y
If intrusion detection and/or prevention based on payload inspection or behavior analytics is not a requirement, an Azure Application Gateway with WAF can be used and configured in "detection mode" to log alerts and threats, or "prevention mode" to actively block detected intrusions and attacks. -- [Understand best practices for securing your AKS cluster with a WAF](https://docs.microsoft.com/azure/aks/operator-best-practices-network#secure-traffic-with-a-web-application-firewall-waf)
+- [Understand best practices for securing your AKS cluster with a WAF](./operator-best-practices-network.md#secure-traffic-with-a-web-application-firewall-waf)
- [How to deploy Azure Application Gateway (Azure WAF)](../web-application-firewall/ag/application-gateway-web-application-firewall-portal.md)
Additional information is available at the referenced links.
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [Azure Policy samples for networking](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#network)
+- [Azure Policy samples for networking](../governance/policy/samples/built-in-policies.md#network)
**Responsibility**: Customer
Create alerts within Azure Monitor that will trigger when changes to critical ne
Use Azure Monitor logs to enable and query the logs from the AKS master components, kube-apiserver and kube-controller-manager. Create and manage the nodes that run the kubelet with a container runtime, and deploy applications through the managed Kubernetes API server. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
-- [Enable and review Kubernetes master node logs in Azure Kubernetes Service (AKS)](/azure/aks/view-master-logs)
+- [Enable and review Kubernetes master node logs in Azure Kubernetes Service (AKS)](./view-control-plane-logs.md)
**Responsibility**: Customer
Export these logs to Log Analytics or another storage platform. In Azure Monitor
Enable and on-board this data to Azure Sentinel or a third-party SIEM based on your organizational business requirements. -- [Review the Log schema including log roles here](/azure/aks/view-master-logs)
+- [Review the Log schema including log roles here](./view-control-plane-logs.md)
-- [Understand Azure Monitor for Containers](/azure/azure-monitor/insights/container-insights-overview)
+- [Understand Azure Monitor for Containers](../azure-monitor/containers/container-insights-overview.md)
-- [How to enable Azure Monitor for Containers](/azure/azure-monitor/insights/container-insights-onboard)
+- [How to enable Azure Monitor for Containers](../azure-monitor/containers/container-insights-onboard.md)
-- [Enable and review Kubernetes master node logs in Azure Kubernetes Service (AKS)](/azure/aks/view-master-logs)
+- [Enable and review Kubernetes master node logs in Azure Kubernetes Service (AKS)](./view-control-plane-logs.md)
**Responsibility**: Customer
Enable audit logs on AKS master components, such as:
Turn on other audit logs such as kube-audit as well. -- [How to enable and review Kubernetes master node logs in AKS](/azure/aks/view-master-logs)
+- [How to enable and review Kubernetes master node logs in AKS](./view-control-plane-logs.md)
**Responsibility**: Customer
Data collection is required to provide visibility into missing updates, misconfi
**Guidance**: Onboard your Azure Kubernetes Service (AKS) instances to Azure Monitor and set the corresponding Azure Log Analytics workspace retention period according to your organization's compliance requirements. -- [How to set log retention parameters for Log Analytics Workspaces](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters for Log Analytics Workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
Use Azure Monitor's Log Analytics workspace to review logs and perform queries o
View the logs generated by AKS master components (kube-apiserver and kube-controller-manager) for troubleshooting your application and services. Enable and on-board data to Azure Sentinel or a third-party SIEM for centralized log management and monitoring. -- [How to enable and review Kubernetes master node logs in AKS](/azure/aks/view-master-logs)
+- [How to enable and review Kubernetes master node logs in AKS](./view-control-plane-logs.md)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md) -- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
**Responsibility**: Customer
Review Security Center alerts on threats and malicious activity detected at the
- [Security alerts reference guide](../security-center/alerts-reference.md) -- [Alerts for containers - Azure Kubernetes Service clusters](https://docs.microsoft.com/azure/security-center/alerts-reference#alerts-akscluster)
+- [Alerts for containers - Azure Kubernetes Service clusters](../security-center/alerts-reference.md#alerts-akscluster)
**Responsibility**: Customer
Create policies and procedures around the use of dedicated administrative accoun
**Guidance**: Use single sign-on for Azure Kubernetes Service (AKS) with Azure Active Directory (Azure AD) integrated authentication for an AKS cluster. -- [How to view Kubernetes logs, events, and pod metrics in real-time](/azure/azure-monitor/insights/container-insights-livedata-overview)
+- [How to view Kubernetes logs, events, and pod metrics in real-time](../azure-monitor/containers/container-insights-livedata-overview.md)
**Responsibility**: Customer
Be aware of roles used for support or troubleshooting purposes. For example, any
**Guidance**: Integrate user authentication for Azure Kubernetes Service (AKS) with Azure Active Directory (Azure AD). Create Diagnostic Settings for Azure AD, sending the audit and sign-in logs to an Azure Log Analytics workspace. Configure desired Alerts (such as when a deactivated account attempts to log in) within an Azure Log Analytics workspace. - [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) -- [How to create, view, and manage log alerts using Azure Monitor](/azure/azure-monitor/platform/alerts-log)
+- [How to create, view, and manage log alerts using Azure Monitor](../azure-monitor/alerts/alerts-log.md)
**Responsibility**: Customer
Configure alerts for proactive notification or log creation when CPU and memory
Use Azure Activity Log to monitor your AKS clusters and related resources at a high level. Integrate with Prometheus to view the application and workload metrics it collects from nodes and Kubernetes, using queries to create custom alerts and dashboards and to perform detailed analysis. -- [Understand Azure Monitor for Containers](/azure/azure-monitor/insights/container-insights-overview)
+- [Understand Azure Monitor for Containers](../azure-monitor/containers/container-insights-overview.md)
-- [How to enable Azure Monitor for containers](/azure/azure-monitor/insights/container-insights-onboard)
+- [How to enable Azure Monitor for containers](../azure-monitor/containers/container-insights-onboard.md)
-- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
**Responsibility**: Customer
Note that the process to keep Windows Server nodes up to date differs from nodes
- [Understand how updates are applied to AKS cluster nodes running Linux](node-updates-kured.md) -- [How to upgrade an AKS node pool for AKS clusters that use Windows Server nodes](https://docs.microsoft.com/azure/aks/use-multiple-node-pools#upgrade-a-node-pool)
+- [How to upgrade an AKS node pool for AKS clusters that use Windows Server nodes](./use-multiple-node-pools.md#upgrade-a-node-pool)
- [Azure Kubernetes Service (AKS) node image upgrades](node-image-upgrade.md)
Taints, labels or tags can be used to reconcile inventory on a regular basis and
- [Managed Clusters - Update Tags](/rest/api/aks/managedclusters/updatetags) -- [Specify a taint, label, or tag for a node pool](https://docs.microsoft.com/azure/aks/use-multiple-node-pools#specify-a-taint-label-or-tag-for-a-node-pool)
+- [Specify a taint, label, or tag for a node pool](./use-multiple-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool)
**Responsibility**: Customer
Use Azure Resource Graph to query/discover resources within your subscriptions.
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
Refer to the list of Center for Internet Security (CIS) controls which are built
- [Security hardening for AKS agent node host OS](security-hardened-vm-host-image.md) -- [Understand state configuration of AKS clusters](https://docs.microsoft.com/azure/aks/concepts-clusters-workloads#control-plane)
+- [Understand state configuration of AKS clusters](./concepts-clusters-workloads.md#control-plane)
- [Understand security hardening in AKS virtual machine hosts](security-hardened-vm-host-image.md)
Create custom policies to audit, and enforce system configurations. Develop a pr
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to use aliases](https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure#aliases)
+- [How to use aliases](../governance/policy/concepts/definition-structure.md#aliases)
**Responsibility**: Customer
Refer to the list of Center for Internet Security (CIS) controls which are built
- [Understand security hardening in AKS virtual machine hosts](security-hardened-vm-host-image.md) -- [Understand state configuration of AKS clusters](https://docs.microsoft.com/azure/aks/concepts-clusters-workloads#control-plane)
+- [Understand state configuration of AKS clusters](./concepts-clusters-workloads.md#control-plane)
**Responsibility**: Customer
Avoid the use of fixed or shared credentials.
- [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](concepts-security.md) -- [How to use Key Vault with your AKS cluster](https://docs.microsoft.com/azure/aks/developer-best-practices-pod-security#limit-credential-exposure)
+- [How to use Key Vault with your AKS cluster](./developer-best-practices-pod-security.md#limit-credential-exposure)
**Responsibility**: Customer
Note that Pod managed identities are intended for use with Linux pods and contai
Service principals can also be used in AKS clusters. However, clusters using service principals eventually may reach a state in which the service principal must be renewed to keep the cluster working. Managing service principals adds complexity, which is why it's easier to use managed identities instead. The same permission requirements apply for both service principals and managed identities. -- [Understand Managed Identities and Key Vault with Azure Kubernetes Service (AKS)](https://docs.microsoft.com/azure/aks/developer-best-practices-pod-security#limit-credential-exposure)
+- [Understand Managed Identities and Key Vault with Azure Kubernetes Service (AKS)](./developer-best-practices-pod-security.md#limit-credential-exposure)
- [Azure AD Pod Identity](https://github.com/Azure/aad-pod-identity)
Limit credential exposure by not defining credentials in your application code.
- [Security alerts reference guide](../security-center/alerts-reference.md) -- [Alerts for containers - Azure Kubernetes Service clusters](https://docs.microsoft.com/azure/security-center/alerts-reference#alerts-akscluster)
+- [Alerts for containers - Azure Kubernetes Service clusters](../security-center/alerts-reference.md#alerts-akscluster)
-- [AKS shared responsibility and Daemon Sets](https://docs.microsoft.com/azure/aks/support-policies#shared-responsibility)
+- [AKS shared responsibility and Daemon Sets](./support-policies.md#shared-responsibility)
**Responsibility**: Shared
Perform regular automated backups of Key Vault Certificates, Keys, Managed Stora
- [How to backup Key Vault Secrets](/powershell/module/azurerm.keyvault/backup-azurekeyvaultsecret) -- [How to enable Azure Backup](/azure/backup/)
+- [How to enable Azure Backup](../backup/index.yml)
**Responsibility**: Customer
Perform regular automated backups of Key Vault Certificates, Keys, Managed Stora
Periodically perform data restoration of Key Vault Certificates, Keys, Managed Storage Accounts, and Secrets, with PowerShell commands. -- [How to restore Key Vault Certificates](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultcertificate?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore Key Vault Certificates](/powershell/module/az.keyvault/restore-azkeyvaultcertificate?view=azps-4.8.0&preserve-view=true)
-- [How to restore Key Vault Keys](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore Key Vault Keys](/powershell/module/az.keyvault/restore-azkeyvaultkey?view=azps-4.8.0&preserve-view=true)
- [How to restore Key Vault Managed Storage Accounts](/powershell/module/az.keyvault/restore-azkeyvaultmanagedstorageaccount) -- [How to restore Key Vault Secrets](https://docs.microsoft.com/powershell/module/az.keyvault/restore-azkeyvaultsecret?view=azps-4.8.0&amp;preserve-view=true)
+- [How to restore Key Vault Secrets](/powershell/module/az.keyvault/restore-azkeyvaultsecret?view=azps-4.8.0&preserve-view=true)
-- [How to recover files from Azure Virtual Machine backup](/azure/backup/backup-azure-restore-files-from-vm)
+- [How to recover files from Azure Virtual Machine backup](../backup/backup-azure-restore-files-from-vm.md)
**Responsibility**: Customer
Enable Soft-Delete in Key Vault to protect keys against accidental or malicious
- [Understand Azure Storage Service Encryption](../storage/common/storage-service-encryption.md) -- [How to enable Soft-Delete in Key Vault](https://docs.microsoft.com/azure/storage/blobs/soft-delete-blob-overview?tabs=azure-portal)
+- [How to enable Soft-Delete in Key Vault](../storage/blobs/soft-delete-blob-overview.md?tabs=azure-portal)
**Responsibility**: Customer
Choose the Security Center data connector to stream the alerts to Azure Sentinel
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
aks Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/troubleshooting.md
When a Kubernetes cluster on Azure (AKS or not) does frequent scale up/down or
Service returned an error. Status=429 Code=\"OperationNotAllowed\" Message=\"The server rejected the request because too many requests have been received for this subscription.\" Details=[{\"code\":\"TooManyRequests\",\"message\":\"{\\\"operationGroup\\\":\\\"HighCostGetVMScaleSet30Min\\\",\\\"startTime\\\":\\\"2020-09-20T07:13:55.2177346+00:00\\\",\\\"endTime\\\":\\\"2020-09-20T07:28:55.2177346+00:00\\\",\\\"allowedRequestCount\\\":1800,\\\"measuredRequestCount\\\":2208}\",\"target\":\"HighCostGetVMScaleSet30Min\"}] InnerError={\"internalErrorCode\":\"TooManyRequestsReceived\"}"} ```
-These throttling errors are described in detail [here](../azure-resource-manager/management/request-limits-and-throttling.md) and [here](../virtual-machines/troubleshooting/troubleshooting-throttling-errors.md)
+These throttling errors are described in detail [here](../azure-resource-manager/management/request-limits-and-throttling.md) and [here](/troubleshoot/azure/virtual-machines/troubleshooting-throttling-errors)
The recommendation from AKS Engineering Team is to ensure you are running version at least 1.18.x, which contains many improvements. More details can be found on these improvements [here](https://github.com/Azure/AKS/issues/1413) and [here](https://github.com/kubernetes-sigs/cloud-provider-azure/issues/247).
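Besides upgrading, callers that hit Azure Resource Manager directly can soften these 429s by honoring the server's Retry-After hint and otherwise backing off exponentially. A minimal Python sketch of that pattern; the `call_with_backoff` helper and its `(status, retry_after, body)` request shape are illustrative assumptions, not part of AKS or the Azure SDK:

```python
import random
import time

def call_with_backoff(request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a throttled ARM-style call.

    `request` is a callable returning (status_code, retry_after_seconds, body).
    On HTTP 429, prefer the server's Retry-After hint; otherwise wait
    base_delay * 2^attempt seconds plus jitter. Raise once retries run out.
    """
    for attempt in range(max_retries):
        status, retry_after, body = request()
        if status != 429:
            return body
        delay = retry_after if retry_after else base_delay * (2 ** attempt) + random.random()
        sleep(delay)
    raise RuntimeError(f"still throttled after {max_retries} retries")
```

The `sleep` parameter is injected only so the wait strategy can be unit-tested without real delays.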
aks Uptime Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/uptime-sla.md
Uptime SLA is a paid feature and enabled per cluster. Uptime SLA pricing is dete
## Creating a new cluster with Uptime SLA
-> [!NOTE]
-> Currently, if you enable Uptime SLA, there is no way to remove it from a cluster.
- To create a new cluster with the Uptime SLA, you use the Azure CLI. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
Use the [`az aks update`][az-aks-update] command to update the existing cluster:
}, ```
+## Opt out of Uptime SLA
+
+You can update your cluster to change to the free tier and opt out of Uptime SLA.
+
+```azurecli-interactive
+# Update an existing cluster to opt out of Uptime SLA
+az aks update --resource-group myResourceGroup --name myAKSCluster --no-uptime-sla
+```
+## Clean up To avoid charges, clean up any resources you created. To delete the cluster, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-get-started-publish-versions.md
az apim api versionset list --resource-group apim-hello-word-resource-group \
When the Azure portal creates a version set for you, it assigns an alphanumeric name, which appears in the **Name** column of the list. Use this name in other Azure CLI commands.
-To see details about a version set, run the [az apim api versionset show](/api/versionset#az_apim_api_versionset_show) command:
+To see details about a version set, run the [az apim api versionset show](/cli/azure/apim/api/versionset#az_apim_api_versionset_show) command:
```azurecli az apim api versionset show --resource-group apim-hello-word-resource-group \
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
Open the Azure AD B2C blade in the portal and do the following steps.
1. Paste the Well-known open-id configuration endpoint from the sign up and sign in policy into the Issuer URL box (we recorded this configuration earlier). 1. Click 'Show Secret' and paste the Backend application's client secret into the appropriate box. 1. Select OK, which takes you back to the identity provider selection blade/screen.
-1. Leave [Token Store](https://docs.microsoft.com/azure/app-service/overview-authentication-authorization#token-store) enabled under advanced settings (default).
+1. Leave [Token Store](../app-service/overview-authentication-authorization.md#token-store) enabled under advanced settings (default).
1. Click 'Save' (at the top left of the blade). > [!IMPORTANT]
The steps above can be adapted and edited to allow many different uses of Azure
* Check out more [videos](https://azure.microsoft.com/documentation/videos/index/?services=api-management) about API Management. * For other ways to secure your back-end service, see [Mutual Certificate authentication](api-management-howto-mutual-certificates.md). * [Create an API Management service instance](get-started-create-service-instance.md).
-* [Manage your first API](import-and-publish.md).
+* [Manage your first API](import-and-publish.md).
api-management Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-baseline.md
Inbound and outbound traffic into the subnet in which API Management is deployed
Caution: When configuring an NSG on the API Management subnet, there are a set of ports that are required to be open. If any of these ports are unavailable, API Management may not operate properly and may become inaccessible. -- [Understand NSG configurations for Azure API Management](https://docs.microsoft.com/azure/api-management/api-management-using-with-vnet#-common-network-configuration-issues)
+- [Understand NSG configurations for Azure API Management](./api-management-using-with-vnet.md#-common-network-configuration-issues)
- [How to Enable NSG Flow Logs](../network-watcher/network-watcher-nsg-flow-logging-portal.md)
Note: This feature is available in the Premium and Developer tiers of API Manage
- [How to integrate API Management in an internal VNET with Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md) -- [Understand Azure Application Gateway](/azure/application-gateway/)
+- [Understand Azure Application Gateway](../application-gateway/index.yml)
**Responsibility**: Customer
Use Azure Security Center Integrated Threat Intelligence to deny communications
- [How to integrate API Management in an internal VNET with Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md) -- [Understand Azure Application Gateway](/azure/application-gateway/)
+- [Understand Azure Application Gateway](../application-gateway/index.yml)
- [Understand Azure Security Center Integrated Threat Intelligence](../security-center/azure-defender.md)
Use Azure Security Center Integrated Threat Intelligence to deny communications
Caution: When configuring an NSG on the API Management subnet, there are a set of ports that are required to be open. If any of these ports are unavailable, API Management may not operate properly and may become inaccessible. -- [Understand NSG configurations for Azure API Management](https://docs.microsoft.com/azure/api-management/api-management-using-with-vnet#-common-network-configuration-issues)
+- [Understand NSG configurations for Azure API Management](./api-management-using-with-vnet.md#-common-network-configuration-issues)
- [How to Enable NSG Flow Logs](../network-watcher/network-watcher-nsg-flow-logging-portal.md)
Note: This feature is available in the Premium and Developer tiers of API Manage
- [Azure Web Application Firewall on Azure Application Gateway](../web-application-firewall/ag/ag-overview.md) -- [Understand Azure Application Gateway](/azure/application-gateway/overview)
+- [Understand Azure Application Gateway](../application-gateway/overview.md)
**Responsibility**: Customer
Caution: When configuring an NSG on the API Management subnet, there are a set o
- [Understanding and using Service Tags](../virtual-network/service-tags-overview.md) -- [Ports required for API Management](https://docs.microsoft.com/azure/api-management/api-management-using-with-vnet#-common-network-configuration-issues)
+- [Ports required for API Management](./api-management-using-with-vnet.md#-common-network-configuration-issues)
**Responsibility**: Customer
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Use Tags for Network Security groups (NSGs) and other resources related to network security and traffic flow. For individual NSG rules, you may use the "Description" field to specify business need and/or duration (etc.) for any rules that allow traffic to/from a network. -- [How to create and use Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use Tags](../azure-resource-manager/management/tag-resources.md)
- [How to create a Virtual Network](../virtual-network/quick-create-portal.md)
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes to network resources associated with your Azure API Management deployments. Create alerts within Azure Monitor that will trigger when changes to critical network resources take place. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log-view)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
In addition to Azure Monitor, Azure API Management can be integrated with one or
Optionally, enable, and on-board data to Azure Sentinel or a third-party Security Incident and Event Management (SIEM). -- [How to configure diagnostic settings](/azure/azure-monitor/platform/diagnostic-settings#create-in-azure-portal)
+- [How to configure diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md#create-in-azure-portal)
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
Optionally, enable, and on-board data to Azure Sentinel or a third-party Securit
For data plane audit logging, diagnostic logs provide rich information about operations and errors that are important for auditing as well as troubleshooting purposes. Diagnostics logs differ from activity logs. Activity logs provide insights into the operations that were performed on your Azure resources. Diagnostics logs provide insight into operations that your resource performed. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
-- [How to enable Diagnostic Settings for Azure API Management](/Azure/api-management/api-management-howto-use-azure-monitor#resource-logs)
+- [How to enable Diagnostic Settings for Azure API Management](./api-management-howto-use-azure-monitor.md#resource-logs)
**Responsibility**: Customer
For data plane audit logging, diagnostic logs provide rich information about ope
**Guidance**: Within Azure Monitor, set your Log Analytics Workspace retention period according to your organization's compliance regulations. Use Azure Storage accounts for long-term/archival storage. -- [How to set log retention parameters for Log Analytics Workspaces](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters for Log Analytics Workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
-- [How to archive logs to an Azure Storage account](/azure/azure-monitor/platform/resource-logs#send-to-azure-storage)
+- [How to archive logs to an Azure Storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage)
**Responsibility**: Customer
For data plane audit logging, diagnostic logs provide rich information about ope
Optionally, integrate API Management with Azure Application Insights and use it as primary or secondary monitoring, tracing, reporting, and alerting tool. -- [How to monitor and review logs for Azure API Management](/Azure/api-management/api-management-howto-use-azure-monitor)
+- [How to monitor and review logs for Azure API Management](./api-management-howto-use-azure-monitor.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
-- [Understand Log Analytics Workspace](/azure/azure-monitor/log-query/log-analytics-tutorial)
+- [Understand Log Analytics Workspace](../azure-monitor/logs/log-analytics-tutorial.md)
- [How to integrate with Azure Application Insights](api-management-howto-app-insights.md)
Optionally, integrate API Management with Azure Application Insights and use it
Optionally, you may enable and on-board data to Azure Sentinel or a third-party SIEM. -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
-- [How to enable diagnostic settings for Azure API Management](https://docs.microsoft.com/azure/api-management/api-management-howto-use-azure-monitor#resource-logs)
+- [How to enable diagnostic settings for Azure API Management](./api-management-howto-use-azure-monitor.md#resource-logs)
-- [How to configure an alert rule for Azure API Management](/Azure/api-management/api-management-howto-use-azure-monitor#set-up-an-alert-rule-for-unauthorized-request)
+- [How to configure an alert rule for Azure API Management](./api-management-howto-use-azure-monitor.md#set-up-an-alert-rule)
- [How to view capacity metrics of an Azure API management instance](api-management-capacity.md)
Follow recommendations from Azure Security Center for the management and mainten
- [How to get a directory role definition in Azure AD with PowerShell](/powershell/module/az.resources/get-azroledefinition) -- [Understand identity and access recommendations from Azure Security Center](https://docs.microsoft.com/azure/security-center/recommendations-reference#identityandaccess-recommendations)
+- [Understand identity and access recommendations from Azure Security Center](../security-center/recommendations-reference.md#identityandaccess-recommendations)
**Responsibility**: Customer
In addition, use Azure AD risk detections to view alerts and reports on risky us
- [How to deploy Privileged Identity Management (PIM)](../active-directory/privileged-identity-management/pim-deployment-plan.md) -- [Understand Azure AD risk detections](/azure/active-directory/reports-monitoring/concept-risk-events)
+- [Understand Azure AD risk detections](../active-directory/identity-protection/overview-identity-protection.md)
**Responsibility**: Customer
Configure advanced monitoring with API Management by using the `log-to-eventhub`
**Guidance**: For account login behavior deviation on the control plane (the Azure portal), use Azure Active Directory (Azure AD) Identity Protection and risk detection features to configure automated responses to detected suspicious actions related to user identities. You can also ingest data into Azure Sentinel for further investigation. -- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
Configure advanced monitoring with API Management by using the `log-to-eventhub`
**Guidance**: Not currently available; Customer Lockbox is not currently supported for Azure API Management. -- [List of Customer Lockbox-supported services](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#supported-services-and-scenarios-in-general-availability)
+- [List of Customer Lockbox-supported services](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability)
**Responsibility**: Customer
Configure advanced monitoring with API Management by using the `log-to-eventhub`
**Guidance**: Use tags to assist in tracking Azure resources that store or process sensitive information. -- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
Configure advanced monitoring with API Management by using the `log-to-eventhub`
**Guidance**: Implement separate subscriptions and/or management groups for development, test, and production. Azure API Management instances should be separated by virtual network (VNet)/subnet and tagged appropriately. -- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
- [How to create Management Groups](../governance/management-groups/create-management-group-portal.md) -- [How to create and use tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use tags](../azure-resource-manager/management/tag-resources.md)
- [How to use Azure API Management with virtual networks](api-management-using-with-vnet.md)
Microsoft manages the underlying infrastructure for Azure API Management and has
**Guidance**: Management plane calls are made through Azure Resource Manager over TLS. A valid JSON web token (JWT) is required. Data plane calls can be secured with TLS and one of the supported authentication mechanisms (for example, client certificate or JWT). -- [Understand data protection in Azure API Management](/azure/api-management/api-management-security-controls#data-protection)
+- [Understand data protection in Azure API Management](#data-protection)
- [Manage TLS settings in Azure API Management](api-management-howto-manage-protocols-ciphers.md)
Microsoft manages the underlying infrastructure for Azure API Management and has
**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production Azure Functions apps as well as other critical or related resources. -- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
- [How to use Azure Monitor and Azure Activity Log in Azure API Management](api-management-howto-use-azure-monitor.md)
Microsoft manages the underlying infrastructure for Azure API Management and has
Underlying platform scanned and patched by Microsoft. Review security controls available to reduce service configuration related vulnerabilities. -- [Understanding security controls available to Azure API Management](/azure/api-management/api-management-security-controls)
+- Understanding security controls available to Azure API Management
**Responsibility**: Customer
Underlying platform scanned and patched by Microsoft. Review security controls a
Underlying platform scanned and patched by Microsoft. Customer to review security controls available to them to reduce service configuration related vulnerabilities. -- [Understanding security controls available to Azure API Management](/azure/api-management/api-management-security-controls) **Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md) -- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
Although classic Azure resources may be discovered via Resource Graph, it is hig
**Guidance**: Apply tags to Azure resources giving metadata to logically organize them into a taxonomy. -- [How to create and utilize Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and utilize Tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
In addition, use Azure policy to put restrictions on the type of resources that
- Not allowed resource types - Allowed resource types -- [How to create additional Azure subscriptions](/azure/billing/billing-create-subscription)
+- [How to create additional Azure subscriptions](../cost-management-billing/manage/create-subscription.md)
- [How to create Management Groups](../governance/management-groups/create-management-group-portal.md) -- [How to create and use Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use Tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
Use Azure Resource Graph to query/discover resources within their subscription(s
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
Use Azure Resource Graph to query/discover resources within their subscription(s
**Guidance**: Define and implement standard security configurations for your Azure API Management service with Azure Policy. Use Azure Policy aliases in the "Microsoft.ApiManagement" namespace to create custom policies to audit or enforce the configuration of your Azure API Management services. -- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&amp;preserve-view=true)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?preserve-view=true&view=azps-4.8.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
In addition, define and implement standard security configurations for your Azur
- [How to create a managed identity for an API Management instance](api-management-howto-use-managed-service-identity.md) -- [Policy to authenticate with managed identity](https://docs.microsoft.com/azure/api-management/api-management-authentication-policies#ManagedIdentity)
+- [Policy to authenticate with managed identity](./api-management-authentication-policies.md#ManagedIdentity)
**Responsibility**: Customer
The service backup and restore features of API Management provide the necessary
- [How to deploy API Management data plane to multiple regions](api-management-howto-deploy-multi-region.md) -- [How to implement disaster recovery using service backup and restore in Azure API Management](https://docs.microsoft.com/azure/api-management/api-management-howto-disaster-recovery-backup-restore#calling-the-backup-and-restore-operations)
+- [How to implement disaster recovery using service backup and restore in Azure API Management](./api-management-howto-disaster-recovery-backup-restore.md#calling-the-backup-and-restore-operations)
- [How to call the API Management backup operation](/rest/api/apimanagement/2019-01-01/apimanagementservice/backup)
The service backup and restore features of API Management provide the necessary
Managed identities can be used to obtain certificates from Azure Key Vault for API Management custom domain names. Backup any certificates being stored within Azure Key Vault. -- [How to implement disaster recovery using service backup and restore in Azure API Management](https://docs.microsoft.com/azure/api-management/api-management-howto-disaster-recovery-backup-restore#calling-the-backup-and-restore-operations)
+- [How to implement disaster recovery using service backup and restore in Azure API Management](./api-management-howto-disaster-recovery-backup-restore.md#calling-the-backup-and-restore-operations)
-- [How to backup Azure Key Vault certificates](https://docs.microsoft.com/powershell/module/az.keyvault/backup-azkeyvaultcertificate?view=azps-4.8.0&amp;preserve-view=true)
+- [How to backup Azure Key Vault certificates](/powershell/module/az.keyvault/backup-azkeyvaultcertificate?preserve-view=true&view=azps-4.8.0)
**Responsibility**: Customer
Managed identities can be used to obtain certificates from Azure Key Vault for A
**Guidance**: Azure API Management writes backups to customer-owned Azure Storage accounts. Follow Azure Storage security recommendations to protect your backup. -- [How to implement disaster recovery using service backup and restore in Azure API Management](https://docs.microsoft.com/azure/api-management/api-management-howto-disaster-recovery-backup-restore#calling-the-backup-and-restore-operations)
+- [How to implement disaster recovery using service backup and restore in Azure API Management](./api-management-howto-disaster-recovery-backup-restore.md#calling-the-backup-and-restore-operations)
- [Security recommendation for blob storage](../storage/blobs/security-recommendations.md)
Additionally, clearly mark subscriptions (for ex. production, non-prod) using ta
- [Security alerts in Azure Security Center](../security-center/security-center-alerts-overview.md) -- [Use tags to organize your Azure resources](/azure/azure-resource-manager/resource-group-using-tags)
+- [Use tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
https://www.microsoft.com/msrc/pentest-rules-of-engagement?rtc=1.
## Next steps -- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-ip-restrictions.md
With service endpoints, you can configure your app with application gateways or
> [!NOTE] > - Service endpoints aren't currently supported for web apps that use IP Secure Sockets Layer (SSL) virtual IP (VIP). >
-#### Set a service tag-based rule (preview)
+#### Set a service tag-based rule
-* For step 4, in the **Type** drop-down list, select **Service Tag (preview)**.
+* For step 4, in the **Type** drop-down list, select **Service Tag**.
- :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-service-tag-add.png" alt-text="Screenshot of the 'Add Restriction' pane with the Service Tag type selected.":::
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-service-tag-add.png?v2" alt-text="Screenshot of the 'Add Restriction' pane with the Service Tag type selected.":::
Each service tag represents a list of IP ranges from Azure services. A list of these services and links to the specific ranges can be found in the [service tag documentation][servicetags].
-The following list of service tags is supported in access restriction rules during the preview phase:
+All available service tags are supported in access restriction rules. For simplicity, only the most common tags are available through the Azure portal. Use Azure Resource Manager templates or scripting to configure more advanced rules, such as regionally scoped rules. These are the tags available through the Azure portal:
+* ActionGroup
+* ApplicationInsightsAvailability
* AzureCloud
* AzureCognitiveSearch
-* AzureConnectors
* AzureEventGrid
* AzureFrontDoor.Backend
* AzureMachineLearning
-* AzureSignalR
* AzureTrafficManager
* LogicApps
-* ServiceFabric
### Edit a rule
To delete a rule, on the **Access Restrictions** page, select the ellipsis (**..
## Access restriction advanced scenarios The following sections describe some advanced scenarios using access restrictions.+
+### Filter by http header
+
+As part of any rule, you can add additional http header filters. The following http header names are supported:
+* X-Forwarded-For
+* X-Forwarded-Host
+* X-Azure-FDID
+* X-FD-HealthProbe
+
+For each header name, you can add up to 8 comma-separated values. The http header filters are evaluated after the rule itself, and both conditions must be true for the rule to apply.
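As a sketch (the resource names, IP range, and header value below are placeholders), a header filter can be attached to a rule with the `-HttpHeader` parameter of `Add-AzWebAppAccessRestrictionRule`:

```azurepowershell-interactive
# Placeholders throughout; allow one IP range only when the request also
# carries the expected X-Forwarded-Host value.
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
    -Name "Header filter example rule" -Priority 100 -Action Allow -IpAddress "192.168.1.0/24" `
    -HttpHeader @{'x-forwarded-host' = 'www.contoso.com'}
```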
+
+### Multi-source rules
+
+Multi-source rules allow you to combine up to 8 IP ranges or 8 Service Tags in a single rule. You might use this if you have more than 512 IP ranges or you want to create logical rules where multiple IP ranges are combined with a single http header filter.
+
+Multi-source rules are defined the same way you define single-source rules, but with each range separated by a comma.
+
+PowerShell example:
+
+ ```azurepowershell-interactive
+ Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
+ -Name "Multi-source rule" -IpAddress "192.168.1.0/24,192.168.10.0/24,192.168.100.0/24" `
+ -Priority 100 -Action Allow
+ ```
+ ### Block a single IP address When you add your first access restriction rule, the service adds an explicit *Deny all* rule with a priority of 2147483647. In practice, the explicit *Deny all* rule is the final rule to be executed, and it blocks access to any IP address that's not explicitly allowed by an *Allow* rule.
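This behavior can be sketched with PowerShell (a sketch, not the article's exact procedure; names and addresses are placeholders): deny the one address, then allow all other traffic so the implicit *Deny all* rule never matches.

```azurepowershell-interactive
# Placeholders throughout. The lower-priority Allow rule admits traffic
# not matched by the Deny rule above it.
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
    -Name "Block single IP" -Priority 100 -Action Deny -IpAddress "192.168.100.50/32"
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
    -Name "Allow all other traffic" -Priority 200 -Action Allow -IpAddress "0.0.0.0/0"
```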
In addition to being able to control access to your app, you can restrict access
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-scm-browse.png" alt-text="Screenshot of the 'Access Restrictions' page in the Azure portal, showing that no access restrictions are set for the SCM site or the app.":::
-### Restrict access to a specific Azure Front Door instance (preview)
-Traffic from Azure Front Door to your application originates from a well known set of IP ranges defined in the AzureFrontDoor.Backend service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you will need to further filter the incoming requests based on the unique http header that Azure Front Door sends. During preview you can achieve this with PowerShell or REST/ARM.
+### Restrict access to a specific Azure Front Door instance
+Traffic from Azure Front Door to your application originates from a well-known set of IP ranges defined in the AzureFrontDoor.Backend service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you will need to further filter the incoming requests based on the unique http header that Azure Front Door sends.
-* PowerShell example (Front Door ID can be found in the Azure portal):
+
+PowerShell example:
+
+ ```azurepowershell-interactive
+ $afd = Get-AzFrontDoor -Name "MyFrontDoorInstanceName"
+ Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
+ -Name "Front Door example rule" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend `
+ -HttpHeader @{'x-azure-fdid' = $afd.FrontDoorId}
+ ```
- ```azurepowershell-interactive
- $frontdoorId = "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
- -Name "Front Door example rule" -Priority 100 -Action Allow -ServiceTag AzureFrontDoor.Backend `
- -HttpHeader @{'x-azure-fdid' = $frontdoorId}
- ```
## Manage access restriction rules programmatically You can add access restrictions programmatically by doing either of the following:
You can add access restrictions programmatically by doing either of the followin
-Name "Ip example rule" -Priority 100 -Action Allow -IpAddress 122.133.144.0/24 ``` > [!NOTE]
- > Working with service tags, http headers or multi-source rules requires at least version 5.1.0. You can verify the version of the installed module with: **Get-InstalledModule -Name Az**
+ > Working with service tags, http headers or multi-source rules requires at least version 5.7.0. You can verify the version of the installed module with: **Get-InstalledModule -Name Az**
You can also set values manually by doing either of the following:
app-service App Service Web Tutorial Custom Domain Uiex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-custom-domain-uiex.md
To add a custom domain to your app, you need to verify your ownership of the dom
<details> <summary>Can I manage DNS from my domain provider using Azure?</summary>
- If you like, you can use Azure DNS to manage DNS records for your domain and configure a custom DNS name for Azure App Service. For more information, see <a href="https://docs.microsoft.com/azure/dns/dns-delegate-domain-azure-dns">Tutorial: Host your domain in Azure DNS></a>.
+ If you like, you can use Azure DNS to manage DNS records for your domain and configure a custom DNS name for Azure App Service. For more information, see <a href="/azure/dns/dns-delegate-domain-azure-dns">Tutorial: Host your domain in Azure DNS</a>.
</details> 1. Find the page for managing DNS records.
To add a custom domain to your app, you need to verify your ownership of the dom
<ul> <li>To map the root domain (for example, <code>contoso.com</code>), use an A record. Don't use the CNAME record for the root record (for information, see the <a href="https://en.wikipedia.org/wiki/CNAME_record">Wikipedia entry</a>).</li> <li>To map a subdomain (for example, <code>www.contoso.com</code>), use a CNAME record.</li>
- <li>You can map a subdomain to the app's IP address directly with an A record, but it's possible for <a href="https://docs.microsoft.com/azure/app-service/overview-inbound-outbound-ips#when-inbound-ip-changes">the IP address to change</a>. The CNAME maps to the app's hostname instead, which is less susceptible to change.</li>
+ <li>You can map a subdomain to the app's IP address directly with an A record, but it's possible for <a href="/azure/app-service/overview-inbound-outbound-ips#when-inbound-ip-changes">the IP address to change</a>. The CNAME maps to the app's hostname instead, which is less susceptible to change.</li>
<li>To map a <a href="https://en.wikipedia.org/wiki/Wildcard_DNS_record">wildcard domain</a> (for example, <code>*.contoso.com</code>), use a CNAME record.</li> </ul> </div>
For a wildcard name like `*` in `*.contoso.com`, create two records according to
<details> <summary>What's with the <strong>Not Secure</strong> warning label?</summary>
- A warning label for your custom domain means that it's not yet bound to a TLS/SSL certificate. Any HTTPS request from a browser to your custom domain will receive an error or warning, depending on the browser. To add a TLS binding, see <a href="https://docs.microsoft.com/azure/app-service/configure-ssl-bindings">Secure a custom DNS name with a TLS/SSL binding in Azure App Service</a>.
+ A warning label for your custom domain means that it's not yet bound to a TLS/SSL certificate. Any HTTPS request from a browser to your custom domain will receive an error or warning, depending on the browser. To add a TLS binding, see <a href="/azure/app-service/configure-ssl-bindings">Secure a custom DNS name with a TLS/SSL binding in Azure App Service</a>.
</details> If you missed a step or made a typo somewhere earlier, a verification error appears at the bottom of the page.
For a wildcard name like `*` in `*.contoso.com`, create two records according to
<details> <summary>What's with the <strong>Not Secure</strong> warning label?</summary>
- A warning label for your custom domain means that it's not yet bound to a TLS/SSL certificate. Any HTTPS request from a browser to your custom domain will receive an error or warning, depending on the browser. To add a TLS binding, see <a href="https://docs.microsoft.com/azure/app-service/configure-ssl-bindings">Secure a custom DNS name with a TLS/SSL binding in Azure App Service</a>.
+ A warning label for your custom domain means that it's not yet bound to a TLS/SSL certificate. Any HTTPS request from a browser to your custom domain will receive an error or warning, depending on the browser. To add a TLS binding, see <a href="/azure/app-service/configure-ssl-bindings">Secure a custom DNS name with a TLS/SSL binding in Azure App Service</a>.
</details> If you missed a step or made a typo somewhere earlier, a verification error appears at the bottom of the page.
For a wildcard name like `*` in `*.contoso.com`, create two records according to
<details> <summary>What's with the <strong>Not Secure</strong> warning label?</summary>
- A warning label for your custom domain means that it's not yet bound to a TLS/SSL certificate. Any HTTPS request from a browser to your custom domain will receive an error or warning, depending on the browser. To add a TLS binding, see <a href="https://docs.microsoft.com/azure/app-service/configure-ssl-bindings">Secure a custom DNS name with a TLS/SSL binding in Azure App Service</a>.
+ A warning label for your custom domain means that it's not yet bound to a TLS/SSL certificate. Any HTTPS request from a browser to your custom domain will receive an error or warning, depending on the browser. To add a TLS binding, see <a href="/azure/app-service/configure-ssl-bindings">Secure a custom DNS name with a TLS/SSL binding in Azure App Service</a>.
</details> --
For more information, see [Assign a custom domain to a web app](scripts/powershe
Continue to the next tutorial to learn how to bind a custom TLS/SSL certificate to a web app. > [!div class="nextstepaction"]
-> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
+> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-dotnetcore.md
namespace SomeNamespace
} ```
-If you configure an app setting with the same name in App Service and in *appsettings.json*, for example, the App Service value takes precedence over the *appsettings.json* value. The local *appsettings.json* value lets you debug the app locally, but the App Service value lets your run the app in product with production settings. Connection strings work in the same way. This way, you can keep your application secrets outside of your code repository and access the appropriate values without changing your code.
+If you configure an app setting with the same name in App Service and in *appsettings.json*, for example, the App Service value takes precedence over the *appsettings.json* value. The local *appsettings.json* value lets you debug the app locally, but the App Service value lets you run the app in production with production settings. Connection strings work in the same way. This way, you can keep your application secrets outside of your code repository and access the appropriate values without changing your code.
> [!NOTE] > Note the [hierarchical configuration data](/aspnet/core/fundamentals/configuration/#hierarchical-configuration-data) in *appsettings.json* is accessed using the `:` delimiter that's standard to .NET Core. To override a specific hierarchical configuration setting in App Service, set the app setting name with the same delimited format in the key. You can run the following example in the [Cloud Shell](https://shell.azure.com):
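A sketch of such an override (the app and resource group names are placeholders, and the nested key `My:Hierarchical:Key` is illustrative):

```azurecli-interactive
# Override the appsettings.json value at "My" -> "Hierarchical" -> "Key".
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> \
    --settings My:Hierarchical:Key="override value"
```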
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
Existing web applications can be redeployed to Azure as follows:
1. **App startup**: Review the section, [Container startup process](#container-startup-process) later in this article to understand how App Service attempts to run your app. App Service uses the Gunicorn web server by default, which must be able to find your app object or *wsgi.py* folder. If needed, you can [Customize the startup command](#customize-startup-command).
-1. **Continuous deployment**: Set up continuous deployment, as described on [Continuous deployment to Azure App Service](deploy-continuous-deployment.md) if using Azure Pipelines or Kudu deployment, or [Deploy to App Service using GitHub Actions](deploy-github-actions.md) if using GitHub actions.
+1. **Continuous deployment**: Set up continuous deployment, as described in [Continuous deployment to Azure App Service](deploy-continuous-deployment.md) if using Azure Pipelines or Kudu deployment, or [Deploy to App Service using GitHub Actions](./deploy-continuous-deployment.md) if using GitHub Actions.
1. **Custom actions**: To perform actions within the App Service container that hosts your app, such as Django database migrations, you can [connect to the container through SSH](configure-linux-open-ssh-session.md). For an example of running Django database migrations, see [Tutorial: Deploy a Django web app with PostgreSQL - run database migrations](tutorial-python-postgresql-app.md#43-run-django-database-migrations). - When using continuous deployment, you can perform those actions using post-build commands as described earlier under [Customize build automation](#customize-build-automation).
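The **App startup** step above can be sketched in a few lines: by default, Gunicorn looks for an `app` callable (the module and variable names here are illustrative, not App Service requirements beyond what the article describes):

```python
# app.py - a minimal WSGI callable that Gunicorn can serve (names are illustrative)
def app(environ, start_response):
    # Respond to every request with a plain-text greeting.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from App Service"]
```

Locally, `gunicorn app:app` would serve this module; on App Service the container startup process performs the equivalent lookup.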
If you're encountering this error with the sample in [Tutorial: Deploy a Django
> [Tutorial: Deploy from private container repository](tutorial-custom-container.md?pivots=container-linux) > [!div class="nextstepaction"]
-> [App Service Linux FAQ](faq-app-service-linux.md)
+> [App Service Linux FAQ](faq-app-service-linux.md)
app-service Deploy Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-best-practices.md
App Service has [built-in continuous delivery](deploy-continuous-deployment.md)
### Use GitHub Actions
-You can also automate your container deployment [with GitHub Actions](deploy-container-github-action.md). The workflow file below will build and tag the container with the commit ID, push it to a container registry, and update the specified site slot with the new image tag.
+You can also automate your container deployment [with GitHub Actions](./deploy-ci-cd-custom-container.md). The workflow file below will build and tag the container with the commit ID, push it to a container registry, and update the specified site slot with the new image tag.
```yaml name: Build and deploy a container image to Azure Web Apps
app-service Deploy Complex Application Predictably https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-complex-application-predictably.md
For more information, see [Using Azure PowerShell with Azure Resource Manager](.
This [preview tool](https://resources.azure.com) enables you to explore the JSON definitions of all the resource groups in your subscription and the individual resources. In the tool, you can edit the JSON definitions of a resource, delete an entire hierarchy of resources, and create new resources. The information readily available in this tool is very helpful for template authoring because it shows you what properties you need to set for a particular type of resource, the correct values, etc. You can even create your resource group in the [Azure Portal](https://portal.azure.com/), then inspect its JSON definitions in the explorer tool to help you templatize the resource group. ### Deploy to Azure button
-If you use GitHub for source control, you can put a [Deploy to Azure button](https://docs.microsoft.com/azure/azure-resource-manager/templates/deploy-to-azure-button) into your README.MD, which enables a turn-key deployment UI to Azure. While you can do this for any simple app, you can extend this to enable deploying an entire resource group by putting an azuredeploy.json file in the repository root. This JSON file, which contains the resource group template, will be used by the Deploy to Azure button to create the resource group. For an example, see the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) sample, which you will use in this tutorial.
+If you use GitHub for source control, you can put a [Deploy to Azure button](../azure-resource-manager/templates/deploy-to-azure-button.md) into your README.MD, which enables a turn-key deployment UI to Azure. While you can do this for any simple app, you can extend this to enable deploying an entire resource group by putting an azuredeploy.json file in the repository root. This JSON file, which contains the resource group template, will be used by the Deploy to Azure button to create the resource group. For an example, see the [ToDoApp](https://github.com/azure-appservice-samples/ToDoApp) sample, which you will use in this tutorial.
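A minimal *azuredeploy.json* skeleton looks like this (a sketch only; the ToDoApp sample's actual template declares real parameters and resources):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": []
}
```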
## Get the sample resource group template So now let's get right to it.
To learn about the JSON syntax and properties for resource types deployed in thi
* [Microsoft.Web/serverfarms](/azure/templates/microsoft.web/serverfarms) * [Microsoft.Web/sites](/azure/templates/microsoft.web/sites) * [Microsoft.Web/sites/slots](/azure/templates/microsoft.web/sites/slots)
-* [Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings)
+* [Microsoft.Insights/autoscalesettings](/azure/templates/microsoft.insights/autoscalesettings)
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/management-addresses.md
ms.assetid: a7738a24-89ef-43d3-bff1-77f43d5a3952 Previously updated : 02/11/2021 Last updated : 03/22/2021
The addresses noted below can be configured in a route table to avoid asymmetric
| Region | Addresses | |--|--|
-| All public regions | 13.66.140.0, 13.67.8.128, 13.69.64.128, 13.69.227.128, 13.70.73.128, 13.71.170.64, 13.71.194.129, 13.75.34.192, 13.75.127.117, 13.77.50.128, 13.78.109.0, 13.89.171.0, 13.94.141.115, 13.94.143.126, 13.94.149.179, 20.36.106.128, 20.36.114.64, 23.102.135.246, 23.102.188.65, 40.69.106.128, 40.70.146.128, 40.71.13.64, 40.74.100.64, 40.78.194.128, 40.79.130.64, 40.79.178.128, 40.83.120.64, 40.83.121.56, 40.83.125.161, 40.112.242.192, 51.140.146.64, 51.140.210.128, 52.151.25.45, 52.162.106.192, 52.165.152.214, 52.165.153.122, 52.165.154.193, 52.165.158.140, 52.174.22.21, 52.178.177.147, 52.178.184.149, 52.178.190.65, 52.178.195.197, 52.187.56.50, 52.187.59.251, 52.187.63.19, 52.187.63.37, 52.224.105.172, 52.225.177.153, 52.231.18.64, 52.231.146.128, 65.52.172.237, 70.37.57.58, 104.44.129.141, 104.44.129.243, 104.44.129.255, 104.44.134.255, 104.208.54.11, 104.211.81.64, 104.211.146.128, 157.55.208.185, 191.233.203.64, 191.236.154.88, 52.181.183.11 |
+| All public regions | 13.66.140.0, 13.67.8.128, 13.69.64.128, 13.69.227.128, 13.70.73.128, 13.71.170.64, 13.71.194.129, 13.75.34.192, 13.75.127.117, 13.77.50.128, 13.78.109.0, 13.89.171.0, 13.94.141.115, 13.94.143.126, 13.94.149.179, 20.36.106.128, 20.36.114.64, 20.37.74.128, 23.96.195.3, 23.102.135.246, 23.102.188.65, 40.69.106.128, 40.70.146.128, 40.71.13.64, 40.74.100.64, 40.78.194.128, 40.79.130.64, 40.79.178.128, 40.83.120.64, 40.83.121.56, 40.83.125.161, 40.112.242.192, 51.107.58.192, 51.107.154.192, 51.116.58.192, 51.116.155.0, 51.120.99.0, 51.120.219.0, 51.140.146.64, 51.140.210.128, 52.151.25.45, 52.162.106.192, 52.165.152.214, 52.165.153.122, 52.165.154.193, 52.165.158.140, 52.174.22.21, 52.178.177.147, 52.178.184.149, 52.178.190.65, 52.178.195.197, 52.187.56.50, 52.187.59.251, 52.187.63.19, 52.187.63.37, 52.224.105.172, 52.225.177.153, 52.231.18.64, 52.231.146.128, 65.52.172.237, 65.52.250.128, 70.37.57.58, 104.44.129.141, 104.44.129.243, 104.44.129.255, 104.44.134.255, 104.208.54.11, 104.211.81.64, 104.211.146.128, 157.55.208.185, 191.233.50.128, 191.233.203.64, 191.236.154.88 |
| Microsoft Azure Government | 23.97.29.209, 13.72.53.37, 13.72.180.105, 52.181.183.11, 52.227.80.100, 52.182.93.40, 52.244.79.34, 52.238.74.16 |
| Azure China | 42.159.4.236, 42.159.80.125 |
The management addresses can be placed in a route table with a next hop of inter
$rg = "resource group name"
$rt = "route table name"
$location = "azure location"
-$managementAddresses = "13.66.140.0", "13.67.8.128", "13.69.64.128", "13.69.227.128", "13.70.73.128", "13.71.170.64", "13.71.194.129", "13.75.34.192", "13.75.127.117", "13.77.50.128", "13.78.109.0", "13.89.171.0", "13.94.141.115", "13.94.143.126", "13.94.149.179", "20.36.106.128", "20.36.114.64", "23.102.135.246", "23.102.188.65", "40.69.106.128", "40.70.146.128", "40.71.13.64", "40.74.100.64", "40.78.194.128", "40.79.130.64", "40.79.178.128", "40.83.120.64", "40.83.121.56", "40.83.125.161", "40.112.242.192", "51.140.146.64", "51.140.210.128", "52.151.25.45", "52.162.106.192", "52.165.152.214", "52.165.153.122", "52.165.154.193", "52.165.158.140", "52.174.22.21", "52.178.177.147", "52.178.184.149", "52.178.190.65", "52.178.195.197", "52.187.56.50", "52.187.59.251", "52.187.63.19", "52.187.63.37", "52.224.105.172", "52.225.177.153", "52.231.18.64", "52.231.146.128", "65.52.172.237", "70.37.57.58", "104.44.129.141", "104.44.129.243", "104.44.129.255", "104.44.134.255", "104.208.54.11", "104.211.81.64", "104.211.146.128", "157.55.208.185", "191.233.203.64", "191.236.154.88", "52.181.183.11"
+$managementAddresses = "13.66.140.0", "13.67.8.128", "13.69.64.128", "13.69.227.128", "13.70.73.128", "13.71.170.64", "13.71.194.129", "13.75.34.192", "13.75.127.117", "13.77.50.128", "13.78.109.0", "13.89.171.0", "13.94.141.115", "13.94.143.126", "13.94.149.179", "20.36.106.128", "20.36.114.64", "20.37.74.128", "23.96.195.3", "23.102.135.246", "23.102.188.65", "40.69.106.128", "40.70.146.128", "40.71.13.64", "40.74.100.64", "40.78.194.128", "40.79.130.64", "40.79.178.128", "40.83.120.64", "40.83.121.56", "40.83.125.161", "40.112.242.192", "51.107.58.192", "51.107.154.192", "51.116.58.192", "51.116.155.0", "51.120.99.0", "51.120.219.0", "51.140.146.64", "51.140.210.128", "52.151.25.45", "52.162.106.192", "52.165.152.214", "52.165.153.122", "52.165.154.193", "52.165.158.140", "52.174.22.21", "52.178.177.147", "52.178.184.149", "52.178.190.65", "52.178.195.197", "52.187.56.50", "52.187.59.251", "52.187.63.19", "52.187.63.37", "52.224.105.172", "52.225.177.153", "52.231.18.64", "52.231.146.128", "65.52.172.237", "65.52.250.128", "70.37.57.58", "104.44.129.141", "104.44.129.243", "104.44.129.255", "104.44.134.255", "104.208.54.11", "104.211.81.64", "104.211.146.128", "157.55.208.185", "191.233.50.128", "191.233.203.64", "191.236.154.88"
az network route-table create --name $rt --resource-group $rg --location $location

foreach ($ip in $managementAddresses) {
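The body of the `foreach` is truncated in this excerpt. A minimal sketch of the intent, assuming each management address gets a /32 route with next hop `Internet` via `az network route-table route create` (the exact command shape and route-naming convention here are assumptions, not taken from the excerpt):

```python
# Sketch only: builds the per-address `az network route-table route create`
# invocations that the truncated foreach above would run. The dot-to-dash
# route-name convention is an assumption for illustration.
def route_create_commands(addresses, rg="myResourceGroup", rt="myRouteTable"):
    cmds = []
    for ip in addresses:
        name = ip.replace(".", "-")  # route names must be valid resource names
        cmds.append(
            f"az network route-table route create --resource-group {rg} "
            f"--route-table-name {rt} --name {name} "
            f"--address-prefix {ip}/32 --next-hop-type Internet"
        )
    return cmds
```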
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the
Health check integrates with App Service's authentication and authorization features. No additional settings are required if these security features are enabled. However, if you're using your own authentication system, the Health check path must allow anonymous access. If the site is HTTP**S**-Only enabled, the Health check request will be sent via HTTP**S**.
-Large enterprise development teams often need to adhere to security requirements for exposed APIs. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can secure the Health check endpoint by requiring the `User-Agent` of the incoming request matches `ReadyForRequest/1.0`. The User-Agent can't be spoofed since the request would already secured by prior security features.
+Large enterprise development teams often need to adhere to security requirements for exposed APIs. To secure the Health check endpoint, you should first use features such as [IP restrictions](app-service-ip-restrictions.md#set-an-ip-address-based-rule), [client certificates](app-service-ip-restrictions.md#set-an-ip-address-based-rule), or a Virtual Network to restrict application access. You can then further secure the Health check endpoint by requiring that the `User-Agent` of the incoming request match `HealthCheck/1.0`. The User-Agent can't be spoofed, since the request would already be secured by the prior security features.
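As a minimal sketch of that last step (not App Service code, just the gating logic an app with its own authentication might apply on its health path):

```python
# Allow a request through to the health endpoint only when the platform
# probe's User-Agent value is present. Per the text above, this check is only
# meaningful once IP restrictions, client certificates, or VNet rules already
# limit who can reach the app at all.
def allow_health_check(headers: dict) -> bool:
    return headers.get("User-Agent") == "HealthCheck/1.0"
```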
## Monitoring
app-service Networking Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking-features.md
ms.assetid: 5c61eed1-1ad1-4191-9f71-906d610ee5b7 Previously updated : 10/18/2020 Last updated : 03/26/2021
Some use cases for this feature:
![Diagram that illustrates the use of service endpoints with Application Gateway.](media/networking-features/service-endpoints-appgw.png) To learn more about configuring service endpoints with your app, see [Azure App Service access restrictions][serviceendpoints].
-#### Access restriction rules based on service tags (preview)
+
+#### Access restriction rules based on service tags
+ [Azure service tags][servicetags] are well-defined sets of IP addresses for Azure services. Service tags group the IP ranges used in various Azure services and are often further scoped to specific regions. This allows you to filter *inbound* traffic from specific Azure services. For a full list of tags and more information, visit the service tag link above. To learn how to enable this feature, see [Configuring access restrictions][iprestrictions].
-#### Http header filtering for access restriction rules (preview)
+
+#### HTTP header filtering for access restriction rules
+ For each access restriction rule, you can add additional HTTP header filtering. This allows you to further inspect the incoming request and filter based on specific HTTP header values. Each header can have up to 8 values per rule. The following HTTP headers are currently supported:

* X-Forwarded-For
* X-Forwarded-Host
For each access restriction rule, you can add additional http header filtering.
Some use cases for HTTP header filtering are:

* Restrict access to traffic from proxy servers forwarding the host name
* Restrict access to a specific Azure Front Door instance with a service tag rule and X-Azure-FDID header restriction

### Private Endpoint

Private Endpoint is a network interface that connects you privately and securely to your Web App by Azure Private Link. Private Endpoint uses a private IP address from your virtual network, effectively bringing the web app into your virtual network. This feature is only for *inbound* flows to your web app.
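The evaluation these rules perform can be sketched as follows (a hypothetical model, not an App Service API): a request passes a rule only if every filtered header carries one of that header's allowed values, with at most 8 values per header.

```python
# Hypothetical model of HTTP-header filtering on an access restriction rule.
# `filters` maps a header name to its allowed values (max 8 per rule, per the doc).
def rule_matches(filters: dict, request_headers: dict) -> bool:
    for header, allowed in filters.items():
        if len(allowed) > 8:
            raise ValueError("at most 8 values per header per rule")
        if request_headers.get(header) not in allowed:
            return False
    return True

# e.g. pair a Front Door service-tag rule with an X-Azure-FDID filter so only
# one (hypothetical) Front Door instance is admitted:
fdid_filter = {"X-Azure-FDID": ["11111111-2222-3333-4444-555555555555"]}
```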
app-service Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-baseline.md
Review detailed security alerts and recommendations in Security Center, at the p
It is recommended that you create a process with automated tools to monitor network resource configurations and quickly detect changes. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
- [Export security alerts and recommendations](../security-center/continuous-export.md)
Enable Azure Activity log diagnostic settings for control plane audit logging. S
Use Microsoft Azure Sentinel, a scalable, cloud-native, security information event management (SIEM) available to connect to various data sources and connectors, based on your business requirements. You can also enable and on-board data to a third-party security information event management (SIEM) system, such as Barracuda in Azure Marketplace. -- [Logging ASE Activity](https://docs.microsoft.com/azure/app-service/environment/using-an-ase#logging)
+- [Logging ASE Activity](./environment/using-an-ase.md#logging)
- [How to enable Diagnostic Settings for Azure App Service](troubleshoot-diagnostic-logs.md)
The "what, who, and when" for any write operations (PUT, POST, DELETE) performed
Additionally, Azure Key Vault provides centralized secret management with access policies and audit history. -- [How to enable Diagnostic Settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable Diagnostic Settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable Diagnostic Settings for Azure App Service](troubleshoot-diagnostic-logs.md)
Additionally, Azure Key Vault provides centralized secret management with access
### 2.5: Configure security log storage retention

**Guidance**: In Azure Monitor, set the log retention period for the Log Analytics workspaces associated with your App Service resources according to your organization's compliance regulations.

-- [How to set log retention parameters](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
If you have deployed a Web Application Firewall (WAF), you can monitor attacks a
Use Azure Sentinel, a scalable and cloud-native security information event management (SIEM), to integrate with various data sources and connectors, as per requirements. Optionally, enable and on-board data to a third-party security information event management solution in the Azure Marketplace. -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable Application Insights](../azure-monitor/app/app-insights-overview.md)
Monitor attacks against your App Service apps by using a real-time Web Applicati
**Guidance**: Azure Active Directory (Azure AD) has built-in roles that must be explicitly assigned and query-able. Use the Azure AD PowerShell module to perform ad hoc queries to discover accounts that are members of administrative groups. -- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?preserve-view=true&view=azureadps-2.0)
-- [How to use managed identities for App Service and Azure Functions](https://docs.microsoft.com/azure/app-service/overview-managed-identity?context=azure%2Factive-directory%2Fmanaged-identities-azure-resources%2Fcontext%2Fmsi-context&amp;tabs=dotnet)
+- [How to use managed identities for App Service and Azure Functions](./overview-managed-identity.md?tabs=dotnet&context=azure%2factive-directory%2fmanaged-identities-azure-resources%2fcontext%2fmsi-context)
- [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md)
Generally, avoid implementing default passwords for user access when building yo
Disable anonymous access, unless you need to support it. -- [Identity providers available by default in Azure App Service](https://docs.microsoft.com/azure/app-service/overview-authentication-authorization#identity-providers)
+- [Identity providers available by default in Azure App Service](./overview-authentication-authorization.md#identity-providers)
- [Authentication and authorization in Azure App Service and Azure Functions](overview-authentication-authorization.md)
App Service apps use federated identity, in which a third-party identity provide
When you enable authentication and authorization with one of these providers, its sign-in endpoint is available for user authentication and for validation of authentication tokens from the provider. -- [Understand authentication and authorization in Azure App Service](https://docs.microsoft.com/azure/app-service/overview-authentication-authorization#identity-providers)
+- [Understand authentication and authorization in Azure App Service](./overview-authentication-authorization.md#identity-providers)
- [Learn about Authentication and Authorization in Azure App Service](overview-authentication-authorization.md)
When you enable authentication and authorization with one of these providers, it
Implement multifactor authentication for Azure AD. Administrators need to ensure that the subscription accounts in the portal are protected. The subscription is vulnerable to attacks because it manages the resources that you created. -- [Azure Security multifactor authentication](/azure/security/develop/secure-aad-app)
+- [Azure Security multifactor authentication](/previous-versions/azure/security/develop/secure-aad-app)
- [How to enable multifactor authentication in Azure](../active-directory/authentication/howto-mfa-getstarted.md)
Threat protection in Security Center provides comprehensive defenses for your en
**Guidance**: Discover stale accounts with the logs provided by Azure Active Directory (Azure AD). Use Azure Identity Access Reviews to efficiently manage group memberships and access to enterprise applications, as well as role assignments. Review user access periodically to make sure only the intended users have continued access. -- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
Access to Azure AD sign-in activity, audit, and risk event log sources allow you
- [How to configure your Azure App Service apps to use Azure AD login](configure-authentication-provider-aad.md) -- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- [How to on-board Azure Sentinel](../sentinel/quickstart-onboard.md)
Use Azure AD Identity Protection to configure automated responses to detected su
**Guidance**: Not available; Customer Lockbox is not supported for Azure App Service. -- [List of Customer Lockbox-supported services](https://docs.microsoft.com/azure/security/fundamentals/customer-lockbox-overview#supported-services-and-scenarios-in-general-availability)
+- [List of Customer Lockbox-supported services](../security/fundamentals/customer-lockbox-overview.md#supported-services-and-scenarios-in-general-availability)
**Responsibility**: Customer
Customer supplied secrets are encrypted at rest while stored in App Service conf
Note that while locally attached disks can be used optionally by websites as temporary storage, (for example, D:\local and %TMP%), they are not encrypted at rest. -- [Understand data protection controls for Azure App Service](https://docs.microsoft.com/azure/app-service/security-recommendations#data-protection)
+- [Understand data protection controls for Azure App Service](./security-recommendations.md#data-protection)
- [Understand Azure Storage encryption at rest](../storage/common/storage-service-encryption.md)
Note that while locally attached disks can be used optionally by websites as tem
**Guidance**: Use Azure Monitor with Azure Activity log to create alerts upon any changes to production App Service apps and other critical or related resources. -- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
Note that while locally attached disks can be used optionally by websites as tem
Review and follow recommendations from Security Center for securing your App Service apps. -- [How to add continuous security validation to your CI/CD pipeline](https://docs.microsoft.com/azure/devops/migrate/security-validation-cicd-pipeline?preserve-view=true&amp;view=azure-devops)
+- [How to add continuous security validation to your CI/CD pipeline](/azure/devops/migrate/security-validation-cicd-pipeline?view=azure-devops&preserve-view=true)
- [How to implement Azure Security Center vulnerability assessment recommendations](../security-center/deploy-vulnerability-assessment-vm.md)
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md) -- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?preserve-view=true&amp;view=azps-4.8.0)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
Use WebJobs in App Service to monitor for unapproved software applications deplo
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
- [Run background tasks with WebJobs in Azure App Service](webjobs-create.md)
Similarly, use WebJobs in App Service to inventory unapproved software applicati
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md) -- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
Apply built-in policy definitions such as:
It is recommended that you document the process to apply the built-in policy definitions for standardized usage. -- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?preserve-view=true&amp;view=azps-4.8.0)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&preserve-view=true)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
It is recommended that you document the process to apply the built-in policy def
Use your existing Continuous Integration (CI) and Continuous Delivery (CD) pipeline to deploy a known-secure configuration. -- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?preserve-view=true&amp;view=azure-devops)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?view=azure-devops&preserve-view=true)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?preserve-view=true&amp;view=azure-devops)
+- [Azure Repos Documentation](/azure/devops/repos/?view=azure-devops&preserve-view=true)
**Responsibility**: Customer
More information is available at the referenced links.
- [Restore an app running in Azure App Service](web-sites-restore.md) -- [Understand encryption at rest in Azure](https://docs.microsoft.com/azure/security/fundamentals/encryption-atrest#encryption-at-rest-in-microsoft-cloud-services)
+- [Understand encryption at rest in Azure](../security/fundamentals/encryption-atrest.md#encryption-at-rest-in-microsoft-cloud-services)
- [Encryption Model and key management table](../security/fundamentals/encryption-atrest.md)
Additionally, clearly mark subscriptions (for example, production, non-productio
## Next steps

-- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-diagnostic-logs.md
In your application code, you use the usual logging facilities to send log messa
System.Diagnostics.Trace.TraceError("If you're seeing this, something bad happened");
```

-- By default, ASP.NET Core uses the [Microsoft.Extensions.Logging.AzureAppServices](https://www.nuget.org/packages/Microsoft.Extensions.Logging.AzureAppServices) logging provider. For more information, see [ASP.NET Core logging in Azure](/aspnet/core/fundamentals/logging/). For information about WebJobs SDK logging, see [Get started with the Azure WebJobs SDK](/azure/app-service/webjobs-sdk-get-started#enable-console-logging)
+- By default, ASP.NET Core uses the [Microsoft.Extensions.Logging.AzureAppServices](https://www.nuget.org/packages/Microsoft.Extensions.Logging.AzureAppServices) logging provider. For more information, see [ASP.NET Core logging in Azure](/aspnet/core/fundamentals/logging/). For information about WebJobs SDK logging, see [Get started with the Azure WebJobs SDK](./webjobs-sdk-get-started.md#enable-console-logging)
## Stream logs
The following table shows the supported log types and descriptions:
* [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md)
* [How to Monitor Azure App Service](web-sites-monitor.md)
* [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md)
-* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
+* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-diagnostics.md
The firewall log is generated only if you have enabled it for each application g
|ruleSetVersion | Rule set version used. Available values are 2.2.9 and 3.0. |
|ruleId | Rule ID of the triggering event. |
|message | User-friendly message for the triggering event. More details are provided in the details section. |
-|action | Action taken on the request. Available values are Matched and Blocked. |
+|action | Action taken on the request. Available values are Blocked and Allowed (for custom rules), Matched (when a rule matches a part of the request), and Detected and Blocked (both for mandatory rules, depending on whether the WAF is in detection or prevention mode). |
|site | Site for which the log was generated. Currently, only Global is listed because rules are global.|
|details | Details of the triggering event. |
|details.message | Description of the rule. |
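Assembled from the fields above, a firewall log entry has roughly this shape (an illustrative fragment; the values are made up and only the field names come from the table):

```json
{
  "ruleSetVersion": "3.0",
  "ruleId": "942130",
  "message": "SQL Injection Attack: SQL Tautology Detected.",
  "action": "Matched",
  "site": "Global",
  "details": {
    "message": "Description of the rule."
  }
}
```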
application-gateway Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/security-baseline.md
For additional information, see the references below.
- [Understand Network Security provided by Azure Security Center](../security-center/security-center-network-recommendations.md) -- [FAQ for diagnostic and Logging for Azure Application Gateway](/azure/application-gateway/application-gateway-faq#what-types-of-logs-does-application-gateway-provide)
+- [FAQ for diagnostic and Logging for Azure Application Gateway](./application-gateway-faq.yml#what-types-of-logs-does-application-gateway-provide)
**Responsibility**: Customer
For additional information, see the references below.
- [Understand Network Security provided by Azure Security Center](../security-center/security-center-network-recommendations.md) -- [FAQ for diagnostic and Logging for Azure Application Gateway](/azure/application-gateway/application-gateway-faq#what-types-of-logs-does-application-gateway-provide)
+- [FAQ for diagnostic and Logging for Azure Application Gateway](./application-gateway-faq.yml#what-types-of-logs-does-application-gateway-provide)
**Responsibility**: Customer
Alternatively, there are multiple marketplace options like the Barracuda WAF for
- [How to deploy Azure WAF](../web-application-firewall/ag/create-waf-policy-ag.md) -- [Understand Barracuda WAF Cloud Service](https://docs.microsoft.com/azure/app-service/environment/app-service-app-service-environment-web-application-firewall#configuring-your-barracuda-waf-cloud-service)
+- [Understand Barracuda WAF Cloud Service](../app-service/environment/app-service-app-service-environment-web-application-firewall.md#configuring-your-barracuda-waf-cloud-service)
**Responsibility**: Customer
You may use Azure PowerShell or Azure CLI to look up or perform actions on resou
**Guidance**: Use Azure Activity Log to monitor network resource configurations and detect changes for network settings and resources related to your Azure Application Gateway deployments. Create alerts within Azure Monitor that will trigger when changes to critical network settings or resources take place. -- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
In addition to Activity Logs, you can configure diagnostic settings for your Azu
Azure Application Gateway also offers built-in integration with Azure Application Insights. Application Insights collects log, performance, and error data. Application Insights automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues and to understand how your web apps are being used. You may enable continuous export to export telemetry from Application Insights into a centralized location to keep the data for longer than the standard retention period. -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable diagnostic settings for Azure Application Gateway](application-gateway-diagnostics.md)
In addition to Activity Logs, you can configure diagnostic settings for your Azu
Azure Application Gateway also offers built-in integration with Azure Application Insights. Application Insights collects log, performance, and error data. Application Insights automatically detects performance anomalies and includes powerful analytics tools to help you diagnose issues and to understand how your web apps are being used. You may enable continuous export to export telemetry from Application Insights into a centralized location to keep the data for longer than the standard retention period. -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/diagnostic-settings-legacy)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable diagnostic settings for Azure Application Gateway](application-gateway-diagnostics.md)
Azure Application Gateway also offers built-in integration with Azure Applicatio
**Guidance**: Within Azure Monitor, set your Log Analytics Workspace retention period according to your organization's compliance regulations. Use Azure Storage Accounts for long-term/archival storage. -- [How to set log retention parameters for Log Analytics Workspaces](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [How to set log retention parameters for Log Analytics Workspaces](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
**Responsibility**: Customer
Use Azure Monitor for Networks for a comprehensive view of health and metrics fo
Optionally, you may enable and on-board data to Azure Sentinel or a third-party SIEM. -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable diagnostic settings for Azure Application Gateway](application-gateway-diagnostics.md)
Use Azure Monitor for Networks for a comprehensive view of health and metrics fo
- [How to deploy Azure WAF](../web-application-firewall/ag/create-waf-policy-ag.md) -- [How to enable diagnostic settings for Azure Activity Log](/azure/azure-monitor/platform/activity-log)
+- [How to enable diagnostic settings for Azure Activity Log](../azure-monitor/essentials/activity-log.md)
- [How to enable diagnostic settings for Azure Application Gateway](application-gateway-diagnostics.md) - [How to use Azure Monitor for Networks](../azure-monitor/insights/network-insights-overview.md) -- [How to create alerts within Azure](/azure/azure-monitor/learn/tutorial-response)
+- [How to create alerts within Azure](../azure-monitor/alerts/tutorial-response.md)
**Responsibility**: Customer
Configure diagnostic settings for your Azure Application Gateway deployments. di
**Guidance**: Azure Active Directory (Azure AD) has built-in roles that must be explicitly assigned and are queryable. Use the Azure AD PowerShell module to perform ad hoc queries to discover accounts that are members of administrative groups. -- [How to get a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrole?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrole?preserve-view=true&view=azureadps-2.0)
-- [How to get members of a directory role in Azure AD with PowerShell](https://docs.microsoft.com/powershell/module/azuread/get-azureaddirectoryrolemember?view=azureadps-2.0&amp;preserve-view=true)
+- [How to get members of a directory role in Azure AD with PowerShell](/powershell/module/azuread/get-azureaddirectoryrolemember?preserve-view=true&view=azureadps-2.0)
**Responsibility**: Customer
For additional information, see the references below.
**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure Identity Access Reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right Users have continued access. -- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure Identity Access Reviews](../active-directory/governance/access-reviews-overview.md)
For additional information, see the references below.
You can streamline this process by creating diagnostic settings for Azure AD user accounts and sending the audit logs and sign-in logs to a Log Analytics Workspace. You can configure desired Alerts within Log Analytics Workspace. -- [How to integrate Azure Activity Logs into Azure Monitor](/azure/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics)
+- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
**Responsibility**: Customer
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use Tags to assist in tracking Azure resources that store or process sensitive information.
-- [How to create and use Tags](/azure/azure-resource-manager/resource-group-using-tags)
+- [How to create and use Tags](../azure-resource-manager/management/tag-resources.md)
**Responsibility**: Customer
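As a sketch of the tagging guidance above (resource names here are hypothetical placeholders), the Az.Resources cmdlets can apply and query tags:

```powershell
# Hypothetical resource; substitute your own resource group and name.
$resource = Get-AzResource -ResourceGroupName 'myResourceGroup' -Name 'myAppGateway'

# Tag the resource as processing sensitive information.
New-AzTag -ResourceId $resource.ResourceId -Tag @{ DataClassification = 'Confidential' }

# Later, find every resource carrying that tag.
Get-AzResource -TagName 'DataClassification' -TagValue 'Confidential'
```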
You can streamline this process by creating diagnostic settings for Azure AD use
**Guidance**: Use Azure Monitor with the Azure Activity log to create alerts for when changes take place to production Azure Application Gateway instances as well as other critical or related resources.
-- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
Although classic Azure resources may be discovered via Resource Graph, it is hig
- [How to create queries with Azure Resource Graph](../governance/resource-graph/first-query-portal.md)
-- [How to view your Azure Subscriptions](https://docs.microsoft.com/powershell/module/az.accounts/get-azsubscription?view=azps-4.8.0&preserve-view=true)
+- [How to view your Azure Subscriptions](/powershell/module/az.accounts/get-azsubscription?preserve-view=true&view=azps-4.8.0)
- [Understand Azure RBAC](../role-based-access-control/overview.md)
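The subscription and Resource Graph queries mentioned above might look like this (a sketch; requires the Az.ResourceGraph module and an authenticated session):

```powershell
# List the subscriptions the signed-in account can see.
Get-AzSubscription

# Summarize resources across those subscriptions by type.
Search-AzGraph -Query 'Resources | summarize count() by type | order by count_ desc'
```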
For additional information, see the references below.
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
For additional information, see the references below.
**Guidance**: Define and implement standard security configurations for network settings related to your Azure Application Gateway deployments. Use Azure Policy aliases in the "Microsoft.Network" namespace to create custom policies to audit or enforce the network configuration of your Azure Application Gateways, Azure Virtual Networks, and network security groups. You may also make use of built-in policy definitions.
-- [How to view available Azure Policy Aliases](https://docs.microsoft.com/powershell/module/az.resources/get-azpolicyalias?view=azps-4.8.0&preserve-view=true)
+- [How to view available Azure Policy Aliases](/powershell/module/az.resources/get-azpolicyalias?preserve-view=true&view=azps-4.8.0)
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
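Listing the aliases in the "Microsoft.Network" namespace, as the guidance suggests, can be sketched as follows (assuming the Az.Resources module):

```powershell
# Show policy aliases available for Microsoft.Network resource types.
Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Network' |
    ForEach-Object { $_.Aliases } |
    Select-Object -ExpandProperty Name
```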
For additional information, see the references below.
**Guidance**: If using custom Azure policy definitions, use Azure DevOps or Azure Repos to securely store and manage your code.
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?preserve-view=true&view=azure-devops)
**Responsibility**: Customer
Configure diagnostic settings for your Azure Application Gateway deployments. di
**Guidance**: When using Azure Web Application Firewall (WAF), you can configure WAF policies. A WAF policy consists of two types of security rules: custom rules that are authored by the customer, and managed rule sets, which are Azure-managed, pre-configured collections of rules. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures.
-- [Understand Azure-managed WAF rule sets](https://docs.microsoft.com/azure/web-application-firewall/ag/ag-overview#waf-policy-and-rules)
+- [Understand Azure-managed WAF rule sets](../web-application-firewall/ag/ag-overview.md#waf-policy-and-rules)
**Responsibility**: Shared
Configure diagnostic settings for your Azure Application Gateway deployments. di
Azure DevOps Services leverages many of the Azure storage features to ensure data availability in the case of hardware failure, service disruption, or region disaster. Additionally, the Azure DevOps team follows procedures to protect data from accidental or malicious deletion.
-- [Understand data availability in Azure DevOps](https://docs.microsoft.com/azure/devops/organizations/security/data-protection?view=azure-devops#data-availability&preserve-view=true)
+- [Understand data availability in Azure DevOps](/azure/devops/organizations/security/data-protection?preserve-view=true&view=azure-devops#data-availability)
-- [How to store code in Azure DevOps](https://docs.microsoft.com/azure/devops/repos/git/gitworkflow?view=azure-devops&preserve-view=true)
+- [How to store code in Azure DevOps](/azure/devops/repos/git/gitworkflow?preserve-view=true&view=azure-devops)
-- [Azure Repos Documentation](https://docs.microsoft.com/azure/devops/repos/?view=azure-devops&preserve-view=true)
+- [Azure Repos Documentation](/azure/devops/repos/?preserve-view=true&view=azure-devops)
**Responsibility**: Customer
Additionally, clearly mark subscriptions (for ex. production, non-prod) and crea
## Next steps
-- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
attestation Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/policy-examples.md
c:[type=="x-ms-sgx-mrsigner"] => issue(type="<custom-name>", value=c.value);
}; ```
-For more information on the incoming claims generated by Azure Attestation, see [claim sets](/azure/attestation/claim-sets). Incoming claims can be used by policy authors to define authorization rules in a custom policy.
+For more information on the incoming claims generated by Azure Attestation, see [claim sets](./claim-sets.md). Incoming claims can be used by policy authors to define authorization rules in a custom policy.
-The issuance rules section is not mandatory. Users can use this section to generate additional outgoing claims in the attestation token with custom names. For more information on the outgoing claims generated by the service in the attestation token, see [claim sets](/azure/attestation/claim-sets).
+The issuance rules section is not mandatory. Users can use this section to generate additional outgoing claims in the attestation token with custom names. For more information on the outgoing claims generated by the service in the attestation token, see [claim sets](./claim-sets.md).
## Default policy for an SGX enclave
issuancerules
}; ```
-Claims used in the default policy are considered deprecated but remain fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names. For more information on the recommended claim names, see [claim sets](/azure/attestation/claim-sets).
+Claims used in the default policy are considered deprecated but remain fully supported and will continue to be included in the future. It is recommended to use the non-deprecated claim names. For more information on the recommended claim names, see [claim sets](./claim-sets.md).
## Sample custom policy to support multiple SGX enclaves
eyJhbGciOiJSU0EyNTYiLCJ4NWMiOlsiTUlJQzFqQ0NBYjZnQXdJQkFnSUlTUUdEOUVGakJcdTAwMkJZ
## Next steps
- [How to author and sign an attestation policy](author-sign-policy.md)
-- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
+- [Set up Azure Attestation using PowerShell](quickstart-powershell.md)
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-hotpatch.md
Title: Hotpatch for Windows Server Azure Edition (preview)
description: Learn how Hotpatch for Windows Server Azure Edition works and how to enable it -+ Last updated 02/22/2021
az provider register --namespace Microsoft.Compute
## Patch installation
-During the preview, [Automatic VM Guest Patching](https://docs.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching) is enabled automatically for all VMs created with _Windows Server 2019 Datacenter: Azure Edition_. With automatic VM guest patching enabled:
+During the preview, [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled automatically for all VMs created with _Windows Server 2019 Datacenter: Azure Edition_. With automatic VM guest patching enabled:
* Patches classified as Critical or Security are automatically downloaded and applied on the VM.
* Patches are applied during off-peak hours in the VM's time zone.
-* Patch orchestration is managed by Azure and patches are applied following [availability-first principles](https://docs.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching#availability-first-patching).
+* Patch orchestration is managed by Azure and patches are applied following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-patching).
* Virtual machine health, as determined through platform health signals, is monitored to detect patching failures.

### How does automatic VM guest patching work?
-When [Automatic VM Guest Patching](https://docs.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching) is enabled on a VM, the available Critical and Security patches are downloaded and applied automatically. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
+When [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) is enabled on a VM, the available Critical and Security patches are downloaded and applied automatically. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
With Hotpatch enabled on _Windows Server 2019 Datacenter: Azure Edition_ VMs, most monthly security updates are delivered as hotpatches that don't require reboots. Latest Cumulative Updates sent on planned or unplanned baseline months will require VM reboots. Additional Critical or Security patches may also be available periodically which may require VM reboots. The VM is assessed automatically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
-Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](https://docs.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching#availability-first-patching). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be assessed and applicable patches will be installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
+Patches are installed within 30 days of the monthly patch releases, following [availability-first principles](../virtual-machines/automatic-vm-guest-patching.md#availability-first-patching). Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be assessed and applicable patches will be installed automatically during the next periodic assessment when the VM is powered on. The next periodic assessment usually happens within a few days.
Definition updates and other patches not classified as Critical or Security won't be installed through automatic VM guest patching.
Definition updates and other patches not classified as Critical or Security won'
To view the patch status for your VM, navigate to the **Guest + host updates** section for your VM in the Azure portal. Under the **Guest OS updates** section, click on 'Go to Hotpatch (Preview)' to view the latest patch status for your VM.
-On this screen, you'll see the Hotpatch status for your VM. You can also review if there are any available patches for your VM that haven't been installed. As described in the 'Patch installation' section above, all security and critical updates will be automatically installed on your VM using [Automatic VM Guest Patching](https://docs.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the 'Update compliance' tab. You can also view the history of update deployments on your VM through the 'Update history'. Update history from the past 30 days is displayed, along with patch installation details.
+On this screen, you'll see the Hotpatch status for your VM. You can also review if there are any available patches for your VM that haven't been installed. As described in the 'Patch installation' section above, all security and critical updates will be automatically installed on your VM using [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md) and no extra actions are required. Patches with other update classifications aren't automatically installed. Instead, they're viewable in the list of available patches under the 'Update compliance' tab. You can also view the history of update deployments on your VM through the 'Update history'. Update history from the past 30 days is displayed, along with patch installation details.
:::image type="content" source="media\automanage-hotpatch\hotpatch-management-ui.png" alt-text="Hotpatch Management.":::
There are some important considerations to running a Windows Server Azure editio
## Next steps
-* Learn about Azure Update Management [here](https://docs.microsoft.com/azure/automation/update-management/overview).
-* Learn more about Automatic VM Guest Patching [here](https://docs.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching)
+* Learn about Azure Update Management [here](../automation/update-management/overview.md).
+* Learn more about Automatic VM Guest Patching [here](../virtual-machines/automatic-vm-guest-patching.md)
automanage Automanage Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-linux.md
description: Learn about the Azure Automanage for virtual machines best practice
+ Last updated 02/22/2021
automanage Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/common-errors.md
Error | Mitigation
:--|:--
Automanage account insufficient permissions error | This may happen if you have recently moved a subscription containing a new Automanage Account into a new tenant. Steps to resolve this are located [here](./repair-automanage-account.md).
Workspace region not matching region mapping requirements | Automanage was unable to onboard your machine but the Log Analytics workspace that the machine is currently linked to is not mapped to a supported Automation region. Ensure that your existing Log Analytics workspace and Automation account are located in a [supported region mapping](../automation/how-to/region-mappings.md).
-"Access denied because of the deny assignment with name 'System deny assignment created by managed application'" | A [denyAssignment](https://docs.microsoft.com/azure/role-based-access-control/deny-assignments) was created on your resource which prevented Automanage from accessing your resource. This may have been caused by either a [Blueprint](https://docs.microsoft.com/azure/governance/blueprints/concepts/resource-locking) or a [Managed Application](https://docs.microsoft.com/azure/azure-resource-manager/managed-applications/overview).
+"Access denied because of the deny assignment with name 'System deny assignment created by managed application'" | A [denyAssignment](../role-based-access-control/deny-assignments.md) was created on your resource which prevented Automanage from accessing your resource. This may have been caused by either a [Blueprint](../governance/blueprints/concepts/resource-locking.md) or a [Managed Application](../azure-resource-manager/managed-applications/overview.md).
+"OS Information: Name='(null)', ver='(null)', agent status='Not Ready'." | Ensure that you're running a [minimum supported agent version](/troubleshoot/azure/virtual-machines/support-extensions-agent-version), the agent is running ([Linux](/troubleshoot/azure/virtual-machines/linux-azure-guest-agent) and [Windows](/troubleshoot/azure/virtual-machines/windows-azure-guest-agent)), and that the agent is up to date ([Linux](../virtual-machines/extensions/update-linux-agent.md) and [Windows](../virtual-machines/extensions/agent-windows.md)).
+"VM has reported a failure when processing extension 'IaaSAntimalware'" | Ensure you don't have another antimalware/antivirus offering already installed on your VM. If that fails, contact support.
+ASC workspace: Automanage does not currently support the Log Analytics service in _location_. | Check that your VM is located in a [supported region](./automanage-virtual-machines.md#supported-regions).
+The template deployment failed because of policy violation. Please see details for more information. | There is a policy preventing Automanage from onboarding your VM. Check the policies that are applied to the subscription or resource group containing the VM you want to onboard to Automanage.
"The assignment has failed; there is no additional information available" | Please open a case with Microsoft Azure support.

## Next steps
automation Automation Create Alert Triggered Runbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-alert-triggered-runbook.md
# Use an alert to trigger an Azure Automation runbook
-You can use [Azure Monitor](../azure-monitor/overview.md) to monitor base-level metrics and logs for most services in Azure. You can call Azure Automation runbooks by using [action groups](../azure-monitor/platform/action-groups.md) to automate tasks based on alerts. This article shows you how to configure and run a runbook by using alerts.
+You can use [Azure Monitor](../azure-monitor/overview.md) to monitor base-level metrics and logs for most services in Azure. You can call Azure Automation runbooks by using [action groups](../azure-monitor/alerts/action-groups.md) to automate tasks based on alerts. This article shows you how to configure and run a runbook by using alerts.
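A runbook started from an action group receives the alert payload through a webhook. A minimal skeleton (a sketch, assuming the common alert schema is enabled on the alert rule) might look like:

```powershell
param (
    [Parameter(Mandatory = $false)]
    [object] $WebhookData
)

if ($WebhookData) {
    # The alert payload arrives as JSON in the webhook request body.
    $alert = $WebhookData.RequestBody | ConvertFrom-Json

    # With the common alert schema, affected resource IDs live under essentials.
    $resourceId = $alert.data.essentials.alertTargetIDs[0]
    Write-Output "Alert '$($alert.data.essentials.alertRule)' fired for: $resourceId"

    # Remediation logic for the resource goes here.
}
else {
    Write-Error 'This runbook is designed to be started from an Azure Monitor alert.'
}
```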
## Alert types
Alerts use action groups, which are collections of actions that are triggered by
* To discover different ways to start a runbook, see [Start a runbook](./start-runbooks.md).
* To create an activity log alert, see [Create activity log alerts](../azure-monitor/alerts/activity-log-alerts.md).
* To learn how to create a near real-time alert, see [Create an alert rule in the Azure portal](../azure-monitor/alerts/alerts-metric.md?toc=/azure/azure-monitor/toc.json).
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-managing-data.md
The following table summarizes the retention policy for different resources.
| Node Reports |A node report is permanently removed 90 days after a new report is generated for that node. |
| Runbooks |A runbook is permanently removed 30 days after a user deletes the resource, or 30 days after a user deletes the account that holds the resource<sup>1</sup>. |
-<sup>1</sup>The runbook can be recovered within the 30-day window by filing an Azure support incident with Microsoft Azure Support. Go to the [Azure support site](/support/options) and select **Submit a support request**.
+<sup>1</sup>The runbook can be recovered within the 30-day window by filing an Azure support incident with Microsoft Azure Support. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Submit a support request**.
## Data backup
automation Automation Runbook Execution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-execution.md
Title: Runbook execution in Azure Automation
description: This article provides an overview of the processing of runbooks in Azure Automation. Previously updated : 10/06/2020 Last updated : 03/23/2021
The following diagram shows the lifecycle of a runbook job for [PowerShell runbo
Runbooks in Azure Automation can run on either an Azure sandbox or a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md).
-When runbooks are designed to authenticate and run against resources in Azure, they run in an Azure sandbox, which is a shared environment that multiple jobs can use. Jobs using the same sandbox are bound by the resource limitations of the sandbox. The Azure sandbox environment does not support interactive operations. It prevents access to all out-of-process COM servers. It also requires the use of local MOF files for runbooks that make Win32 calls.
+When runbooks are designed to authenticate and run against resources in Azure, they run in an Azure sandbox, which is a shared environment that multiple jobs can use. Jobs using the same sandbox are bound by the resource limitations of the sandbox. The Azure sandbox environment does not support interactive operations. It prevents access to all out-of-process COM servers, and it does not support making [WMI calls](/windows/win32/wmisdk/wmi-architecture) to the Win32 provider in your runbook. These scenarios are only supported by running the runbook on a Windows Hybrid Runbook Worker.
+ You can also use a [Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) to run runbooks directly on the computer that hosts the role and against local resources in the environment. Azure Automation stores and manages runbooks and then delivers them to one or more assigned computers.
automation Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-baseline.md
Alternatively, if you have a specific requirement, Azure Firewall may also be us
- [How to deploy and configure Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md)
-- [Runbook execution environment](https://docs.microsoft.com/azure/automation/automation-runbook-execution#runbook-execution-environment)
+- [Runbook execution environment](./automation-runbook-execution.md#runbook-execution-environment)
**Responsibility**: Customer
You may also use Azure Blueprints to simplify large-scale Azure deployments by p
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [Azure Policy samples for networking](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#network)
+- [Azure Policy samples for networking](../governance/policy/samples/built-in-policies.md#network)
- [How to create an Azure Blueprint](../governance/blueprints/create-blueprint-portal.md)
You may use Azure PowerShell or Azure CLI to look-up or perform actions on resou
**Guidance**: Use Azure Activity Log to monitor resource configurations and detect changes to your network resources. Create alerts within Azure Monitor that will trigger when changes to critical resources take place.
-- [How to view and retrieve Azure Activity Log events](/azure/azure-monitor/platform/activity-log-view)
+- [How to view and retrieve Azure Activity Log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
-- [How to create alerts in Azure Monitor](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [How to create alerts in Azure Monitor](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
**Responsibility**: Customer
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
+- [How to collect platform logs and metrics with Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md)
- [How to get started with Azure Monitor and third-party SIEM integration](https://azure.microsoft.com/blog/use-azure-monitor-to-integrate-with-siem-tools/)
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
**Guidance**: Enable Azure Monitor for access to your audit and activity logs which includes event source, date, user, timestamp, source addresses, destination addresses, and other useful elements.
-- [How to collect platform logs and metrics with Azure Monitor](/azure/azure-monitor/platform/diagnostic-settings)
+- [How to collect platform logs and metrics with Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md)
-- [View and retrieve Azure Activity log events](/azure/azure-monitor/platform/activity-log#view-the-activity-log)
+- [View and retrieve Azure Activity log events](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
**Responsibility**: Customer
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
**Guidance**: Within Azure Monitor, set your Log Analytics workspace retention period according to your organization's compliance regulations. Use Azure Storage Accounts for long-term/archival storage.
-- [Change the data retention period in Log Analytics](/azure/azure-monitor/platform/manage-cost-storage#change-the-data-retention-period)
+- [Change the data retention period in Log Analytics](../azure-monitor/logs/manage-cost-storage.md#change-the-data-retention-period)
-- [Data retention details for Automation Accounts](https://docs.microsoft.com/azure/automation/automation-managing-data#data-retention)
+- [Data retention details for Automation Accounts](./automation-managing-data.md#data-retention)
**Responsibility**: Customer
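Setting the workspace retention period from PowerShell can be sketched as follows (workspace and resource group names are placeholders; allowed retention values depend on your pricing tier):

```powershell
# Raise retention on a Log Analytics workspace to 180 days.
Set-AzOperationalInsightsWorkspace `
    -ResourceGroupName 'myResourceGroup' `
    -Name 'myWorkspace' `
    -RetentionInDays 180
```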
Alternatively, you may enable and on-board data to Azure Sentinel or a third-par
- [How to onboard Azure Sentinel](../sentinel/quickstart-onboard.md)
-- [Understand log queries in Azure Monitor](/azure/azure-monitor/log-query/log-analytics-tutorial)
+- [Understand log queries in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md)
-- [How to perform custom queries in Azure Monitor](/azure/azure-monitor/log-query/get-started-queries)
+- [How to perform custom queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md)
**Responsibility**: Customer
Alternatively, you may enable and on-board data to Azure Sentinel.
- [How to manage alerts in Azure Security Center](../security-center/security-center-managing-and-responding-alerts.md)
-- [How to alert on Azure Monitor log data](/azure/azure-monitor/learn/tutorial-response)
+- [How to alert on Azure Monitor log data](../azure-monitor/alerts/tutorial-response.md)
**Responsibility**: Customer
Alternatively, you may enable and on-board data to Azure Sentinel.
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Directory (Azure AD) Privileged Identity Management Privileged Roles for Microsoft Services, and Azure Resource Manager.
-- [Learn more about Privileged Identity Management](/azure/active-directory/privileged-identity-management/)
+- [Learn more about Privileged Identity Management](../active-directory/privileged-identity-management/index.yml)
- [Delete a Run As or Classic Run As account](delete-run-as-account.md)
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Di
- [How to integrate Azure Activity Logs into Azure Monitor](../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
-- [How to configure action groups for custom alerting and notification](/azure/azure-monitor/platform/action-groups)
+- [How to configure action groups for custom alerting and notification](../azure-monitor/alerts/action-groups.md)
**Responsibility**: Customer
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Di
- [How to create and configure an Azure AD instance](../active-directory-domain-services/tutorial-create-instance.md)
-- [Use runbook authentication with managed identities](https://docs.microsoft.com/azure/automation/automation-hrw-run-runbooks#runbook-auth-managed-identities)
+- [Use runbook authentication with managed identities](./automation-hrw-run-runbooks.md#runbook-auth-managed-identities)
**Responsibility**: Customer
You can also enable a Just-In-Time / Just-Enough-Access by using Azure Active Di
**Guidance**: Azure Active Directory (Azure AD) provides logs to help discover stale accounts. In addition, use Azure identity access reviews to efficiently manage group memberships, access to enterprise applications, and role assignments. User access can be reviewed on a regular basis to make sure only the right users have continued access. Whenever you use Automation Account Run As accounts for your runbooks, ensure these service principals are also tracked in your inventory, since they often have elevated permissions. Delete any unused Run As accounts to minimize your exposed attack surface.
-- [Understand Azure AD reporting](/azure/active-directory/reports-monitoring/)
+- [Understand Azure AD reporting](../active-directory/reports-monitoring/index.yml)
- [How to use Azure identity access reviews](../active-directory/governance/access-reviews-overview.md)
You can streamline this process by creating Diagnostic Settings for Azure AD use
**Guidance**: Use Azure Active Directory (Azure AD) Risk and Identity Protection features to configure automated responses to detected suspicious actions related to user identities for your network resource. You can also ingest data into Azure Sentinel for further investigation.
-- [How to view Azure AD risky sign-ins](/azure/active-directory/reports-monitoring/concept-risky-sign-ins)
+- [How to view Azure AD risky sign-ins](../active-directory/identity-protection/overview-identity-protection.md)
- [How to configure and enable Identity Protection risk policies](../active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md)
For the underlying platform which is managed by Microsoft, Microsoft treats all
Follow Azure Security Center recommendations for encryption at rest and encryption in transit, where applicable.
-- [Understand encryption in transit with Azure](https://docs.microsoft.com/azure/security/fundamentals/encryption-overview#encryption-of-data-in-transit)
+- [Understand encryption in transit with Azure](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit)
- [Azure Automation TLS 1.2 enforcement](../active-directory/hybrid/reference-connect-tls-enforcement.md)
Follow Azure Security Center recommendations for encryption at rest and encrypti
- [How to configure Azure RBAC](../role-based-access-control/role-assignments-portal.md)
-- [Runbook permissions for a Hybrid Runbook Worker](https://docs.microsoft.com/azure/automation/automation-hybrid-runbook-worker#runbook-permissions-for-a-hybrid-runbook-worker)
+- [Runbook permissions for a Hybrid Runbook Worker](./automation-hybrid-runbook-worker.md#runbook-permissions-for-a-hybrid-runbook-worker)
- [Manage role permissions and security](automation-role-based-access-control.md)
When using Hybrid Runbook Workers, the virtual disks on the virtual machines are
- [Azure Disk Encryption for Windows VMs](../virtual-machines/windows/disk-encryption-overview.md)
-- [Use of customer-managed keys for an Automation account](https://docs.microsoft.com/azure/automation/automation-secure-asset-encryption#use-of-customer-managed-keys-for-an-automation-account)
+- [Use of customer-managed keys for an Automation account](./automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account)
- [Managed variables in Azure Automation](shared-resources/variables.md)
When using Hybrid Runbook Workers, the virtual disks on the virtual machines are
**Guidance**: Use Azure Monitor with Azure Activity Log to create alerts for when changes take place to critical Azure resources like networking components, Azure Automation accounts, and runbooks.
-- [Diagnostic logging for a network security group](https://docs.microsoft.com/azure/private-link/private-link-overview#logging-and-monitoring)
+- [Diagnostic logging for a network security group](../private-link/private-link-overview.md#logging-and-monitoring)
-- [How to create alerts for Azure Activity Log events](/azure/azure-monitor/platform/alerts-activity-log)
+- [How to create alerts for Azure Activity Log events](../azure-monitor/alerts/alerts-activity-log.md)
**Responsibility**: Customer
In addition, use the Azure Resource Graph to query/discover resources within sub
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [How to deny a specific resource type with Azure Policy](https://docs.microsoft.com/azure/governance/policy/samples/built-in-policies#general)
+- [How to deny a specific resource type with Azure Policy](../governance/policy/samples/built-in-policies.md#general)
**Responsibility**: Customer
You may also use recommendations from Azure Security Center as a secure configur
- [How to configure and manage Azure Policy](../governance/policy/tutorials/create-and-manage.md)
-- [How to use Aliases](https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure#aliases)
+- [How to use Aliases](../governance/policy/concepts/definition-structure.md#aliases)
- [Azure Policy sample built-ins for Azure Automation](policy-reference.md)
Use the source control integration feature to keep your runbooks in your Automat
- [How to backup key vault keys in Azure](/powershell/module/az.keyvault/backup-azkeyvaultkey)
-- [Use of customer-managed keys for an Automation account](https://docs.microsoft.com/azure/automation/automation-secure-asset-encryption#use-of-customer-managed-keys-for-an-automation-account)
+- [Use of customer-managed keys for an Automation account](./automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account)
- [Use source control integration](source-control-integration.md)
Use the source control integration feature to keep your runbooks in your Automat
- [How to backup key vault keys in Azure](/powershell/module/az.keyvault/backup-azkeyvaultkey)
-- [Use of customer-managed keys for an Automation account](https://docs.microsoft.com/azure/automation/automation-secure-asset-encryption#use-of-customer-managed-keys-for-an-automation-account)
+- [Use of customer-managed keys for an Automation account](./automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account)
-- [Azure data backup for Automation Accounts](https://docs.microsoft.com/azure/automation/automation-managing-data#data-backup)
+- [Azure data backup for Automation Accounts](./automation-managing-data.md#data-backup)
**Responsibility**: Customer
Use the source control integration feature to keep your runbooks in your Automat
- [How to restore key vault keys in Azure](/powershell/module/az.keyvault/restore-azkeyvaultkey)
-- [Use of customer-managed keys for an Automation account](https://docs.microsoft.com/azure/automation/automation-secure-asset-encryption#use-of-customer-managed-keys-for-an-automation-account)
+- [Use of customer-managed keys for an Automation account](./automation-secure-asset-encryption.md#use-of-customer-managed-keys-for-an-automation-account)
**Responsibility**: Customer
Additionally, clearly mark subscriptions (for ex. production, non-prod) using ta
## Next steps
-- See the [Azure Security Benchmark V2 overview](/azure/security/benchmarks/overview)
-- Learn more about [Azure security baselines](/azure/security/benchmarks/security-baselines-overview)
+- See the [Azure Security Benchmark V2 overview](../security/benchmarks/overview.md)
+- Learn more about [Azure security baselines](../security/benchmarks/security-baselines-overview.md)
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/modules.md
Importing an Az module into your Automation account doesn't automatically import
* When a runbook invokes a cmdlet from a module.
* When a runbook imports the module explicitly with the [Import-Module](/powershell/module/microsoft.powershell.core/import-module) cmdlet.
-* When a runbook imports the module explicitly with the [using module](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/about/about_using#module-syntax) statement. The using statement is supported starting with Windows PowerShell 5.0 and supports classes and enum type import.
+* When a runbook imports the module explicitly with the [using module](/powershell/module/microsoft.powershell.core/about/about_using#module-syntax) statement. The using statement is supported starting with Windows PowerShell 5.0 and supports classes and enum type import.
* When a runbook imports another dependent module.
-You can import the Az modules in the Azure portal. Remember to import only the Az modules that you need, not the entire Az.Automation module. Because [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/1.1.0) is a dependency for the other Az modules, be sure to import this module before any others.
+You can import the Az modules into the Automation account from the Azure portal. Remember to import only the Az modules that you need, not every Az module that's available. Because [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/1.1.0) is a dependency for the other Az modules, be sure to import this module before any others.
1. From your Automation account, under **Shared Resources**, select **Modules**.
2. Select **Browse Gallery**.
Remove-AzAutomationModule -Name <moduleName> -AutomationAccountName <automationA
* For more information about using Azure PowerShell modules, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).
-* To learn more about creating PowerShell modules, see [Writing a Windows PowerShell module](/powershell/scripting/developer/module/writing-a-windows-powershell-module).
+* To learn more about creating PowerShell modules, see [Writing a Windows PowerShell module](/powershell/scripting/developer/module/writing-a-windows-powershell-module).
automation Start Stop Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/troubleshoot/start-stop-vm.md
Review the following list for potential resolutions:
```
* To start and stop VMs, the Run As account for the Automation account must have appropriate permissions to the VM. To learn how to check the permissions on a resource, see [Quickstart: View roles assigned to a user using the Azure portal](../../role-based-access-control/check-access.md). You'll need to provide the application ID for the service principal used by the Run As account. You can retrieve this value by going to your Automation account in the Azure portal. Select **Run as accounts** under **Account Settings** and select the appropriate Run As account.
-* If the VM is having a problem starting or deallocating, there might be an issue on the VM itself. Examples are an update that's being applied when the VM is trying to shut down, a service that hangs, and more. Go to your VM resource, and check **Activity Logs** to see if there are any errors in the logs. You might also attempt to log in to the VM to see if there are any errors in the event logs. To learn more about troubleshooting your VM, see [Troubleshooting Azure virtual machines](../../virtual-machines/troubleshooting/index.yml).
+* If the VM is having a problem starting or deallocating, there might be an issue on the VM itself. Examples are an update that's being applied when the VM is trying to shut down, a service that hangs, and more. Go to your VM resource, and check **Activity Logs** to see if there are any errors in the logs. You might also attempt to log in to the VM to see if there are any errors in the event logs. To learn more about troubleshooting your VM, see [Troubleshooting Azure virtual machines](/troubleshoot/azure/virtual-machines/welcome-virtual-machines).
* Check the [job streams](../automation-runbook-execution.md#job-statuses) to look for any errors. In the portal, go to your Automation account and select **Jobs** under **Process Automation**.

## <a name="custom-runbook"></a>Scenario: My custom runbook fails to start or stop my VMs
If you don't see your problem here or you can't resolve your issue, try one of t
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
You can use Update Management in Azure Automation to manage operating system upd
> [!NOTE]
> At this time, enabling Update Management directly from an Arc enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand the requirements and how to enable it for your server.
-To download and install available *Critical* and *Security* patches automatically on your Azure VM, review [Automatic VM guest patching](../../virtual-machines/windows/automatic-vm-guest-patching.md) for Windows VMs.
+To download and install available *Critical* and *Security* patches automatically on your Azure VM, review [Automatic VM guest patching](../../virtual-machines/automatic-vm-guest-patching.md) for Windows VMs.
Before deploying Update Management and enabling your machines for management, make sure that you understand the information in the following sections.
Here are the ways that you can enable Update Management and select machines to b
* For details of working with Update Management, see [Manage updates for your VMs](manage-updates-for-vm.md).
-* Review commonly asked questions about Update Management in the [Azure Automation frequently asked questions](../automation-faq.md).
+* Review commonly asked questions about Update Management in the [Azure Automation frequently asked questions](../automation-faq.md).
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
To set up a managed identity in the portal, you first create an application and
> [!NOTE]
- > In the case you want to use a **user-assigned managed identity**, be sure to specify the clientId when creating the [ManagedIdentityCredential](https://docs.microsoft.com/dotnet/api/azure.identity.managedidentitycredential).
+ > In the case you want to use a **user-assigned managed identity**, be sure to specify the clientId when creating the [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential).
>```
>config.AddAzureAppConfiguration(options =>
>    options.Connect(new Uri(settings["AppConfig:Endpoint"]), new ManagedIdentityCredential(<your_clientId>)));
>```
- >As explained in the [Managed Identities for Azure resources FAQs](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/known-issues#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library enforces you to specify the desired identity to avoid posible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined, and there is no system-assigned managed identity.
 + >As explained in the [Managed Identities for Azure resources FAQs](../active-directory/managed-identities-azure-resources/known-issues.md#what-identity-will-imds-default-to-if-dont-specify-the-identity-in-the-request), there is a default way to resolve which managed identity is used. In this case, the Azure Identity library requires you to specify the desired identity to avoid possible runtime issues in the future (for instance, if a new user-assigned managed identity is added or if the system-assigned managed identity is enabled). So, you will need to specify the clientId even if only one user-assigned managed identity is defined and there is no system-assigned managed identity.
1. To use both App Configuration values and Key Vault references, update *Program.cs* as shown below. This code calls `SetCredential` as part of `ConfigureKeyVault` to tell the config provider what credential to use when authenticating to Key Vault.
To set up a managed identity in the portal, you first create an application and
> [!NOTE]
> The `ManagedIdentityCredential` works only in Azure environments of services that support managed identity authentication. It doesn't work in the local environment. Use [`DefaultAzureCredential`](/dotnet/api/azure.identity.defaultazurecredential) for the code to work in both local and Azure environments as it will fall back to a few authentication options including managed identity.
>
- > In case you want to use a **user-asigned managed identity** with the `DefaultAzureCredential` when deployed to Azure, [specify the clientId](https://docs.microsoft.com/dotnet/api/overview/azure/identity-readme#specifying-a-user-assigned-managed-identity-with-the-defaultazurecredential).
 + > In case you want to use a **user-assigned managed identity** with the `DefaultAzureCredential` when deployed to Azure, [specify the clientId](/dotnet/api/overview/azure/identity-readme#specifying-a-user-assigned-managed-identity-with-the-defaultazurecredential).
[!INCLUDE [Prepare repository](../../includes/app-service-deploy-prepare-repo.md)]
For example, you can update the .NET Framework console app created in the quicks
In this tutorial, you added an Azure managed identity to streamline access to App Configuration and improve credential management for your app. To learn more about how to use App Configuration, continue to the Azure CLI samples.

> [!div class="nextstepaction"]
-> [CLI samples](./cli-samples.md)
+> [CLI samples](./cli-samples.md)
azure-arc Using Extensions In Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/using-extensions-in-postgresql-hyperscale-server-group.md
This guide will take in a scenario to use two of these extensions:
## Add extensions to the shared_preload_libraries
For details about shared_preload_libraries, read the PostgreSQL documentation [here](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES):
- This step isn't needed for the extensions that are part of `contrib`
-- this step isn't required for extensions that are not required to pre-load by shared_preload_libraries. For these extensions you may jump the next next paragraph [Create extensions](https://docs.microsoft.com/azure/azure-arc/data/using-extensions-in-postgresql-hyperscale-server-group#create-extensions).
+- This step isn't required for extensions that don't need to be pre-loaded by shared_preload_libraries. For these extensions, you may skip to the next paragraph, [Create extensions](#create-extensions).
### Add an extension at the creation time of a server group
```console
See the [pg_cron README](https://github.com/citusdata/pg_cron) for full details
## Next steps
- Read documentation on [`plv8`](https://plv8.github.io/)
- Read documentation on [`PostGIS`](https://postgis.net/)
-- Read documentation on [`pg_cron`](https://github.com/citusdata/pg_cron)
+- Read documentation on [`pg_cron`](https://github.com/citusdata/pg_cron)
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/agent-upgrade.md
One minor version of Azure Arc enabled Kubernetes agents is released approximate
## Next steps
-* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./connect-cluster.md).
-* Already have a Kubernetes cluster connected Azure Arc? [Create configurations on your Arc enabled Kubernetes cluster](./use-gitops-connected-cluster.md).
+* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* Already have a Kubernetes cluster connected to Azure Arc? [Create configurations on your Arc enabled Kubernetes cluster](./tutorial-use-gitops-connected-cluster.md).
* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-arc Conceptual Agent Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/conceptual-agent-architecture.md
Most on-prem datacenters enforce strict network rules that prevent inbound commu
## Next steps
-* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./connect-cluster.md).
-* Learn more about the creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc enabled Kubernetes](./conceptual-configurations.md).
+* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
+* Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc enabled Kubernetes](./conceptual-configurations.md).
azure-arc Plan At Scale Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/plan-at-scale-deployment.md
In this phase, system engineers or administrators enable the core features in th
|Task |Detail |Duration |
|--|--|--|
| [Create a resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) | A dedicated resource group to include only Arc enabled servers and centralize management and monitoring of these resources. | One hour |
-| Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/cloud-adoption-framework/decision-guides/resource-tagging/) that can help reduce the complexity of managing your Arc enabled servers and simplify making management decisions. | One day |
+| Apply [Tags](../../azure-resource-manager/management/tag-resources.md) to help organize machines. | Evaluate and develop an IT-aligned [tagging strategy](/azure/cloud-adoption-framework/decision-guides/resource-tagging/) that can help reduce the complexity of managing your Arc enabled servers and simplify making management decisions. | One day |
| Design and deploy [Azure Monitor Logs](../../azure-monitor/logs/data-platform-logs.md) | Evaluate [design and deployment considerations](../../azure-monitor/logs/design-logs-deployment.md) to determine if your organization should use an existing or implement another Log Analytics workspace to store collected log data from hybrid servers and machines.<sup>1</sup> | One day |
| [Develop an Azure Policy](../../governance/policy/overview.md) governance plan | Determine how you will implement governance of hybrid servers and machines at the subscription or resource group scope with Azure Policy. | One day |
| Configure [Role based access control](../../role-based-access-control/overview.md) (RBAC) | Develop an access plan to control who has access to manage Arc enabled servers and the ability to view their data from other Azure services and solutions. | One day |
Next, we add to the foundation laid in phase 1 by preparing for and deploying th
|Task |Detail |Duration |
|--|--|--|
-| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at scale onboarding resources:<br><br> <ul><li> [At scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/master/azure_arc_servers_jumpstart/docs/vmware_scaled_powercli_win.md)</ul></li> <ul><li>[At scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/master/azure_arc_servers_jumpstart/docs/vmware_scaled_powercli_linux.md)</ul></li> <ul><li>[At scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/master/azure_arc_servers_jumpstart/docs/aws_scale_ansible.md)</ul></li> <ul><li>[At scale deployment using PowerShell remoting](https://docs.microsoft.com/azure/azure-arc/servers/onboard-powershell) (Windows only)</ul></li>| One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
+| Download the pre-defined installation script | Review and customize the pre-defined installation script for at-scale deployment of the Connected Machine agent to support your automated deployment requirements.<br><br> Sample at-scale onboarding resources:<br><br> <ul><li> [At-scale basic deployment script](onboard-service-principal.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Windows Server VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_win/_index.md)</ul></li> <ul><li>[At-scale onboarding VMware vSphere Linux VMs](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/vmware_scaled_powercli_linux/_index.md)</ul></li> <ul><li>[At-scale onboarding AWS EC2 instances using Ansible](https://github.com/microsoft/azure_arc/blob/main/docs/azure_arc_jumpstart/azure_arc_servers/scaled_deployment/aws_scaled_ansible/_index.md)</ul></li> <ul><li>[At-scale deployment using PowerShell remoting](https://docs.microsoft.com/azure/azure-arc/servers/onboard-powershell) (Windows only)</ul></li>| One or more days depending on requirements, organizational processes (for example, Change and Release Management), and automation method used. |
| [Create service principal](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) |Create a service principal to connect machines non-interactively using Azure PowerShell or from the portal.| One hour |
| Deploy the Connected Machine agent to your target servers and machines |Use your automation tool to deploy the scripts to your servers and connect them to Azure.| One or more days depending on your release plan and if following a phased rollout. |
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-csharp.md
After you've verified that the function runs correctly on your local computer, i
## Next steps
-You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+> [!div class="nextstepaction"]
+> [Connect to a database](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp)
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp)
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-node.md
In this section, you create a function app and related resources in your Azure s
## Next steps
-You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
+> [!div class="nextstepaction"]
+> [Connect to a database](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-javascript)
> [!div class="nextstepaction"]
> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
+
+ Title: Connect Azure Functions to Azure Cosmos DB using Visual Studio Code
+description: Learn how to connect Azure Functions to an Azure Cosmos DB account by adding an output binding to your Visual Studio Code project.
+ Last updated : 03/23/2021++
+zone_pivot_groups: programming-languages-set-functions-temp
++
+# Connect Azure Functions to Azure Cosmos DB using Visual Studio Code
++
+This article shows you how to use Visual Studio Code to connect [Azure Cosmos DB](../cosmos-db/introduction.md) to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a JSON document stored in an Azure Cosmos DB container.
+
+Before you begin, you must complete the article, [Quickstart: Create an Azure Functions project from the command line](create-first-function-cli-csharp.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
+Before you begin, you must complete the article, [Quickstart: Create an Azure Functions project from the command line](create-first-function-cli-node.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
+
+## Configure your environment
+
+Before you get started, make sure to install the [Azure Databases extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-cosmosdb) for Visual Studio Code.
+
+## Create your Azure Cosmos DB account
+
+> [!IMPORTANT]
+> [Azure Cosmos DB serverless](../cosmos-db/serverless.md) is now available in preview. This consumption-based mode makes Azure Cosmos DB a strong option for serverless workloads. To use Azure Cosmos DB in serverless mode, choose **Serverless** as the **Capacity mode** when creating your account.
+
+1. In a new browser window, sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Click **Create a resource** > **Databases** > **Azure Cosmos DB**.
+
+ :::image type="content" source="../../includes/media/cosmos-db-create-dbaccount/create-nosql-db-databases-json-tutorial-1.png" alt-text="The Azure portal Databases pane" border="true":::
+
+3. In the **Create Azure Cosmos DB Account** page, enter the settings for your new Azure Cosmos DB account.
+
+ Setting|Value|Description
 + |--|--|--|
+ Subscription|*Your subscription*|Choose the Azure subscription where you created your Function App in the [previous article](./create-first-function-vs-code-csharp.md).
+ Resource Group|*Your resource group*|Choose the resource group where you created your Function App in the [previous article](./create-first-function-vs-code-csharp.md).
+ Account Name|*Enter a unique name*|Enter a unique name to identify your Azure Cosmos DB account.<br><br>The account name can use only lowercase letters, numbers, and hyphens (-), and must be between 3 and 31 characters long.
+ API|Core (SQL)|Select **Core (SQL)** to create a document database that you can query by using a SQL syntax. [Learn more about the Azure Cosmos DB SQL API](../cosmos-db/introduction.md).|
+ Location|*Select the region closest to your location*|Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to you or your users to get the fastest access to your data.
+ Capacity mode|Serverless or Provisioned throughput|Select **Serverless** to create an account in [serverless](../cosmos-db/serverless.md) mode. Select **Provisioned throughput** to create an account in [provisioned throughput](../cosmos-db/set-throughput.md) mode.<br><br>Choose **Serverless** if you're getting started with Azure Cosmos DB.
+
+4. Click **Review + create**. You can skip the **Network** and **Tags** sections.
+
+5. Review the summary information and click **Create**.
+
+6. Wait for your new Azure Cosmos DB account to be created, then select **Go to resource**.
+
+ :::image type="content" source="../cosmos-db/media/create-cosmosdb-resources-portal/azure-cosmos-db-account-deployment-successful.png" alt-text="The creation of the Azure Cosmos DB account is complete" border="true":::
+
+## Create an Azure Cosmos DB database and container
+
+From your Azure Cosmos DB account, select **Data Explorer**, then **New Container**. Create a new database named *my-database* and a new container named *my-container*, and choose `/id` as the [partition key](../cosmos-db/partitioning-overview.md).
++
+## Update your function app settings
+
+In the [previous quickstart article](./create-first-function-vs-code-csharp.md), you created a function app in Azure. In this article, you update your Function App to write JSON documents in the Azure Cosmos DB container you've created above. To connect to your Azure Cosmos DB account, you must add its connection string to your app settings. You then download the new setting to your local.settings.json file so you can connect to your Azure Cosmos DB account when running locally.
+
+1. In Visual Studio Code, locate the Azure Cosmos DB account you have just created. Right-click on its name, and select **Copy Connection String**.
+
+ :::image type="content" source="./media/functions-add-output-binding-cosmos-db-vs-code/copy-connection-string.png" alt-text="Copying the Azure Cosmos DB connection string" border="true":::
+
+1. Press <kbd>F1</kbd> to open the command palette, then search for and run the command `Azure Functions: Add New Setting...`.
+
+1. Choose the function app you created in the previous article. Provide the following information at the prompts:
+
+ + **Enter new app setting name**: Type `CosmosDbConnectionString`.
+
+ + **Enter value for "CosmosDbConnectionString"**: Paste the connection string of your Azure Cosmos DB account, as copied earlier.
+
+1. Press <kbd>F1</kbd> again to open the command palette, then search for and run the command `Azure Functions: Download Remote Settings...`.
+
+1. Choose the function app you created in the previous article. Select **Yes to all** to overwrite the existing local settings.
+
+## Register binding extensions
+
+Because you're using an Azure Cosmos DB output binding, you must have the corresponding bindings extension installed before you run the project.
++
+With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure Cosmos DB extension package to your project.
+
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB
+```
+++
+Your project has been configured to use [extension bundles](functions-bindings-register.md#extension-bundles), which automatically installs a predefined set of extension packages.
+
+Extension bundles usage is enabled in the host.json file at the root of the project, which appears as follows:
+++
+Now, you can add the Azure Cosmos DB output binding to your project.
+
+## Add an output binding
+
+In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the function.json file. The way you define these attributes depends on the language of your function app.
++
+In a C# class library project, the bindings are defined as binding attributes on the function method. The *function.json* file required by Functions is then auto-generated based on these attributes.
+
+Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
+
+```csharp
+[CosmosDB(
+ databaseName: "my-database",
+ collectionName: "my-container",
+ ConnectionStringSetting = "CosmosDbConnectionString")]IAsyncCollector<dynamic> documentsOut,
+```
+
+The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that are written to your Azure Cosmos DB container when the function completes. Specific attribute properties specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`.
+
+The `Run` method definition should now look like the following:
+
+```csharp
+[FunctionName("HttpExample")]
+public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+ [CosmosDB(
+ databaseName: "my-database",
+ collectionName: "my-container",
+ ConnectionStringSetting = "CosmosDbConnectionString")]IAsyncCollector<dynamic> documentsOut,
+ ILogger log)
+```
+++
+Binding attributes are defined directly in the function.json file. Depending on the binding type, additional properties may be required. The [Azure Cosmos DB output configuration](./functions-bindings-cosmosdb-v2-output.md#configuration) describes the fields required for an Azure Cosmos DB output binding. The extension makes it easy to add bindings to the function.json file.
+
+To create a binding, right-click (Ctrl+click on macOS) the `function.json` file in your HttpTrigger folder and choose **Add binding...**. Follow the prompts to define the following binding properties for the new binding:
+
+| Prompt | Value | Description |
+| -- | -- | -- |
+| **Select binding direction** | `out` | The binding is an output binding. |
+| **Select binding with direction "out"** | `Azure Cosmos DB` | The binding is an Azure Cosmos DB binding. |
+| **The name used to identify this binding in your code** | `outputDocument` | Name that identifies the binding parameter referenced in your code. |
+| **The Cosmos DB database where data will be written** | `my-database` | The name of the Azure Cosmos DB database containing the target container. |
+| **Database collection where data will be written** | `my-container` | The name of the Azure Cosmos DB container where the JSON documents will be written. |
+| **If true, creates the Cosmos DB database and collection** | `false` | The target database and container already exist. |
+| **Select setting from "local.settings.json"** | `CosmosDbConnectionString` | The name of an application setting that contains the connection string for the Azure Cosmos DB account. |
+| **Partition key (optional)** | *leave blank* | Only required when the output binding creates the container. |
+| **Collection throughput (optional)** | *leave blank* | Only required when the output binding creates the container. |
+
+A binding is added to the `bindings` array in your function.json, which should look like the following:
+
+```json
+{
+ "type": "cosmosDB",
+ "direction": "out",
+ "name": "outputDocument",
+ "databaseName": "my-database",
+ "collectionName": "my-container",
+ "createIfNotExists": "false",
+ "connectionStringSetting": "CosmosDbConnectionString"
+}
+```
++
+## Add code that uses the output binding
++
+Add code that uses the `documentsOut` output binding object to create a JSON document. Add this code before the method returns.
+
+```csharp
+if (!string.IsNullOrEmpty(name))
+{
+ // Add a JSON document to the output container.
+ await documentsOut.AddAsync(new
+ {
+ // create a random ID
+ id = System.Guid.NewGuid().ToString(),
+ name = name
+ });
+}
+```
+
+At this point, your function should look as follows:
+
+```csharp
+[FunctionName("HttpExample")]
+public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+ [CosmosDB(
+ databaseName: "my-database",
+ collectionName: "my-container",
+ ConnectionStringSetting = "CosmosDbConnectionString")]IAsyncCollector<dynamic> documentsOut,
+ ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ if (!string.IsNullOrEmpty(name))
+ {
+ // Add a JSON document to the output container.
+ await documentsOut.AddAsync(new
+ {
+ // create a random ID
+ id = System.Guid.NewGuid().ToString(),
+ name = name
+ });
+ }
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+}
+```
+++
+Add code that uses the `outputDocument` output binding object on `context.bindings` to create a JSON document. Add this code before the `context.res` statement.
+
+```javascript
+if (name) {
+ context.bindings.outputDocument = JSON.stringify({
+ // create a random ID
+ id: new Date().toISOString() + Math.random().toString().substr(2,8),
+ name: name
+ });
+}
+```
+
+At this point, your function should look as follows:
+
+```javascript
+module.exports = async function (context, req) {
+ context.log('JavaScript HTTP trigger function processed a request.');
+
+ const name = (req.query.name || (req.body && req.body.name));
+ const responseMessage = name
+ ? "Hello, " + name + ". This HTTP triggered function executed successfully."
+ : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";
+
+ if (name) {
+ context.bindings.outputDocument = JSON.stringify({
+ // create a random ID
+ id: new Date().toISOString() + Math.random().toString().substr(2,8),
+ name: name
+ });
+ }
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: responseMessage
+ };
+}
+```
++
+## Run the function locally
+
+1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools.
+
+1. With Core Tools running, go to the **Azure: Functions** area. Under **Functions**, expand **Local Project** > **Functions**. Right-click (Ctrl-click on Mac) the `HttpExample` function and choose **Execute Function Now...**.
+
+ :::image type="content" source="../../includes/media/functions-run-function-test-local-vs-code/execute-function-now.png" alt-text="Execute function now from Visual Studio Code":::
+
+1. In **Enter request body** you see the request message body value of `{ "name": "Azure" }`. Press Enter to send this request message to your function.
+
+1. After a response is returned, press <kbd>Ctrl + C</kbd> to stop Core Tools.
+
+### Verify that a JSON document has been created
+
+1. On the Azure portal, go back to your Azure Cosmos DB account and select **Data Explorer**.
+
+1. Expand your database and container, and select **Items** to list the documents created in your container.
+
+1. Verify that a new JSON document has been created by the output binding.
+
+ :::image type="content" source="./media/functions-add-output-binding-cosmos-db-vs-code/verify-output.png" alt-text="Verifying that a new document has been created in the Azure Cosmos DB container" border="true":::
+
+## Redeploy and verify the updated app
+
+1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select `Azure Functions: Deploy to function app...`.
+
+1. Choose the function app that you created in the first article. Because you're redeploying your project to the same app, select **Deploy** to dismiss the warning about overwriting files.
+
+1. After deployment completes, you can again use the **Execute Function Now...** feature to trigger the function in Azure.
+
+1. Again [check the documents created in your Azure Cosmos DB container](#verify-that-a-json-document-has-been-created) to verify that the output binding again generates a new JSON document.
+
+## Clean up resources
+
+In Azure, *resources* refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group.
+
+You created resources to complete these quickstarts. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). If you don't need the resources anymore, here's how to delete them:
++
+## Next steps
+
+You've updated your HTTP triggered function to write JSON documents to an Azure Cosmos DB container. Now you can learn more about developing Functions using Visual Studio Code:
++ [Develop Azure Functions using Visual Studio Code](functions-develop-vs-code.md)
++ [Azure Functions triggers and bindings](functions-triggers-bindings.md)
++ [Examples of complete Function projects in C#](/samples/browse/?products=azure-functions&languages=csharp)
++ [Azure Functions C# developer reference](functions-dotnet-class-library.md)
++ [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript)
++ [Azure Functions JavaScript developer guide](functions-reference-node.md)
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-manage-authentication.md
Request a token from the Azure AD token endpoint. In your Azure AD request, use
For more information about requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md) and view specific scenarios in the table of [Scenarios](./how-to-manage-authentication.md#determine-authentication-and-authorization).
+## Manage and rotate shared keys
+
+Your Azure Maps subscription keys are similar to a root password for your Azure Maps account. Always be careful to protect your subscription keys. Use Azure Key Vault to manage and rotate your keys securely. Avoid distributing access keys to other users, hard-coding them, or saving them anywhere in plain text that is accessible to others. Rotate your keys if you believe they may have been compromised.
+
+> [!NOTE]
+> Microsoft recommends using Azure Active Directory (Azure AD) to authorize requests if possible, instead of Shared Key. Azure AD provides superior security and ease of use over Shared Key.
+
+### Manually rotate subscription keys
+
+Microsoft recommends that you rotate your subscription keys periodically to help keep your Azure Maps account secure. If possible, use Azure Key Vault to manage your access keys. If you are not using Key Vault, you will need to rotate your keys manually.
+
+Two subscription keys are assigned so that you can rotate your keys. Having two keys ensures that your application maintains access to Azure Maps throughout the process.
+
+To rotate your Azure Maps subscription keys in the Azure portal:
+
+1. Update your application code to reference the secondary key for the Azure Maps account and deploy.
+2. Navigate to your Azure Maps account in the [Azure portal](https://portal.azure.com/).
+3. Under **Settings**, select **Authentication**.
+4. To regenerate the primary key for your Azure Maps account, select the **Regenerate** button next to the primary key.
+5. Update your application code to reference the new primary key and deploy.
+6. Regenerate the secondary key in the same manner.
+
+> [!WARNING]
+> Microsoft recommends using only one of the keys in all of your applications at the same time. If you use Key 1 in some places and Key 2 in others, you will not be able to rotate your keys without some applications losing access.
+
## Next steps

For more information, see [Azure AD and Azure Maps Web SDK](./how-to-use-map-control.md).
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-map-control.md
If directly accessing the Azure Maps REST services, change the URL domain to `at
If developing using a JavaScript framework, one of the following open-source projects may be useful:

-- [ng-azure-maps](https://github.com/arnaudleclerc/ng-azure-maps) - Angular 10 wrapper around Azure maps.
-- [AzureMapsControl.Components](https://github.com/arnaudleclerc/AzureMapsControl.Components) - An Azure Maps Blazor component.
-- [Azure Maps React Component](https://github.com/WiredSolutions/react-azure-maps) - A react wrapper for the Azure Maps control.
-- [Vue Azure Maps](https://github.com/rickyruiz/vue-azure-maps) - An Azure Maps component for Vue application.
+* [ng-azure-maps](https://github.com/arnaudleclerc/ng-azure-maps) - Angular 10 wrapper around Azure maps.
+* [AzureMapsControl.Components](https://github.com/arnaudleclerc/AzureMapsControl.Components) - An Azure Maps Blazor component.
+* [Azure Maps React Component](https://github.com/WiredSolutions/react-azure-maps) - A react wrapper for the Azure Maps control.
+* [Vue Azure Maps](https://github.com/rickyruiz/vue-azure-maps) - An Azure Maps component for Vue application.
## Next steps
Learn how to style a map:
> [!div class="nextstepaction"]
> [Choose a map style](choose-map-style.md)
-To add more data to your map:
+Learn best practices and see samples:
> [!div class="nextstepaction"]
-> [Create a map](map-create.md)
+> [Best practices](web-sdk-best-practices.md)
> [!div class="nextstepaction"]
> [Code samples](/samples/browse/?products=azure-maps)
To add more data to your map:
For a list of samples showing how to integrate Azure Active Directory (AAD) with Azure Maps, see: > [!div class="nextstepaction"]
-> [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
+> [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples)
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/web-sdk-best-practices.md
+
+ Title: Azure Maps Web SDK best practices | Microsoft Azure Maps
+description: Learn tips & tricks to optimize your use of the Azure Maps Web SDK.
+ Last updated : 3/22/2021
+# Azure Maps Web SDK best practices
+
+This document focuses on best practices for the Azure Maps Web SDK; however, many of the best practices and optimizations outlined here can be applied to all other Azure Maps SDKs.
+
+The Azure Maps Web SDK provides a powerful canvas for rendering large spatial data sets in many different ways. In some cases, there are multiple ways to render data the same way, but depending on the size of the data set and the desired functionality, one method may perform better than others. This article highlights best practices and tips and tricks to maximize performance and create a smooth user experience.
+
+Generally, when looking to improve performance of the map, look for ways to reduce the number of layers and sources, and the complexity of the data sets and rendering styles being used.
+
+## Security basics
+
+The single most important part of your application is its security. If your application isn't secure, a hacker can compromise it, no matter how good the user experience might be. The following are some tips to keep your Azure Maps application secure. When using Azure, be sure to familiarize yourself with the security tools available to you. See the [introduction to Azure security](https://docs.microsoft.com/azure/security/fundamentals/overview).
+
+> [!IMPORTANT]
+> Azure Maps provides two methods of authentication.
+> * Subscription key-based authentication
+> * Azure Active Directory authentication
+> Use Azure Active Directory in all production applications.
+> Subscription key-based authentication is simple and is what most mapping platforms use as a lightweight way to measure your usage of the platform for billing purposes. However, it is not a secure form of authentication and should only be used locally when developing apps. Some platforms can restrict which IP addresses or HTTP referrers appear in requests; however, this information can easily be spoofed. If you do use subscription keys, be sure to [rotate them regularly](how-to-manage-authentication.md#manage-and-rotate-shared-keys).
+> Azure Active Directory is an enterprise identity service that has a large selection of security features and settings for all sorts of application scenarios. Microsoft recommends that all production applications using Azure Maps use Azure Active Directory for authentication.
+> Learn more about [managing authentication in Azure Maps](how-to-manage-authentication.md) in this document.
+
+### Secure your private data
+
+When data is added to the Azure Maps interactive map SDKs, it is rendered locally on the end user's device and is never sent back out to the internet for any reason.
+
+If your application is loading data that should not be publicly accessible, make sure that the data is stored in a secure location, is accessed in a secure manner, and that the application itself is locked down and only available to your desired users. If any of these steps are skipped, an unauthorized person has the potential to access this data. Azure Active Directory can assist you with locking this down.
+
+See this tutorial on [adding authentication to your web app running on Azure App Service](https://docs.microsoft.com/azure/app-service/scenario-secure-app-authentication-app-service).
+
+### Use the latest versions of Azure Maps
+
+The Azure Maps SDKs go through regular security testing along with any external dependency libraries that may be used by the SDKs. Any known security issue is fixed in a timely manner and released to production. If your application points to the latest major version of the hosted Azure Maps Web SDK, it automatically receives all minor version updates, which include security-related fixes.
+
+If self-hosting the Azure Maps Web SDK via the NPM module, be sure to use the caret (^) symbol in combination with the Azure Maps NPM package version number in your `package.json` file so that it always points to the latest minor version.
+
+```json
+"dependencies": {
+ "azure-maps-control": "^2.0.30"
+}
+```
+
+## Optimize initial map load
+
+When a web page is loading, one of the first things you want to do is start rendering something as soon as possible so that the user isn't staring at a blank screen.
+
+### Watch the maps ready event
+
+Similarly, when the map initially loads, it is often desirable to load data onto it as quickly as possible, so the user isn't looking at an empty map. Because the map loads resources asynchronously, you have to wait until the map is ready to be interacted with before trying to render your own data on it. There are two events you can wait for: a `load` event and a `ready` event. The `load` event fires after the map has completely finished loading the initial map view and every map tile has loaded. The `ready` event fires when the minimal map resources needed to start interacting with the map are available. The `ready` event can often fire in half the time of the `load` event and thus allows you to start loading your data into the map sooner.
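As a sketch, wiring data loading to the `ready` event might look like the following. The `atlas` global comes from the Azure Maps Web SDK and the map construction is browser-only, so those parts are shown commented out; `addDataWhenReady` is a hypothetical helper name:

```javascript
// Sketch: register data loading on the map's 'ready' event rather than
// 'load', so sources and layers are added as early as possible.
// 'addDataWhenReady' is a hypothetical helper name.
function addDataWhenReady(map) {
  map.events.add('ready', function () {
    // Safe to create sources and layers here; 'ready' typically fires
    // well before 'load', which waits for every tile of the initial view.
    // e.g. map.sources.add(new atlas.source.DataSource());
  });
}

// Browser-only; requires a div with id 'myMap' and valid auth options:
// const map = new atlas.Map('myMap', { authOptions: { /* ... */ } });
// addDataWhenReady(map);
```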
+
+### Lazy load the Azure Maps Web SDK
+
+If the map isn't needed right away, lazy load the Azure Maps Web SDK until it is needed. This delays the loading of the JavaScript and CSS files used by the Azure Maps Web SDK until they are needed. A common scenario where this occurs is when the map is loaded in a tab or flyout panel that isn't displayed on page load.
+The following code sample shows how to delay loading the Azure Maps Web SDK until a button is pressed.
+
+<br/>
+
+<iframe height="500" style="width: 100%;" scrolling="no" title="Lazy load the map" src="https://codepen.io/azuremaps/embed/vYEeyOv?height=500&theme-id=default&default-tab=js,result" frameborder="no" allowtransparency="true" allowfullscreen="true">
+ See the Pen <a href='https://codepen.io/azuremaps/pen/vYEeyOv'>Lazy load the map</a> by Azure Maps
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
+</iframe>
+
+### Add a placeholder for the map
+
+If the map takes a while to load due to network limitations or other priorities within your application, consider adding a small background image to the map `div` as a placeholder for the map. This will fill the void of the map `div` while it is loading.
+
+### Set initial map style and camera options on initialization
+
+Often apps want to load the map to a specific location or style. Sometimes developers will wait until the map has loaded (or wait for the `ready` event) and then use the `setCamera` or `setStyle` functions of the map. This often takes longer to reach the desired initial map view, since many resources end up being loaded by default before the resources needed for the desired map view are loaded. A better approach is to pass the desired map camera and style options into the map when initializing it.
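For example, the desired view can be passed directly to the constructor. This is a sketch: the center, zoom, and style values are placeholders, and the `atlas.Map` call is browser-only, so it is shown commented out:

```javascript
// Sketch: define the desired camera and style up front so the first
// render already shows the intended view. The center, zoom, and style
// values here are illustrative placeholders.
const initialOptions = {
  center: [-122.33, 47.6], // [longitude, latitude]
  zoom: 12,
  style: 'grayscale_dark'
};

// Browser-only:
// const map = new atlas.Map('myMap', {
//   ...initialOptions,
//   authOptions: { /* your credentials */ }
// });
```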
+
+## Optimize data sources
+
+The Web SDK has two data sources:
+
+* **GeoJSON source**: Known as the `DataSource` class, this source manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features).
+* **Vector tile source**: Known as the `VectorTileSource` class, this source loads data formatted as vector tiles for the current map view, based on the map's tiling system. Ideal for large to massive data sets (millions or billions of features).
+
+### Use tile-based solutions for large datasets
+
+If working with larger datasets containing millions of features, the recommended way to achieve optimal performance is to expose the data using a server-side solution, such as a vector or raster image tile service.
+Vector tiles are optimized to load only the data that is in view with the geometries clipped to the focus area of the tile and generalized to match the resolution of the map for the zoom level of the tile.
+
+The [Azure Maps Creator platform](creator-indoor-maps.md) provides the ability to retrieve data in vector tile format. Other data formats can be converted into vector tiles using tools such as [Tippecanoe](https://github.com/mapbox/tippecanoe) or one of the many [resources listed on this page](https://github.com/mapbox/awesome-vector-tiles).
+
+It is also possible to create a custom service that renders datasets as raster image tiles on the server-side and load the data using the TileLayer class in the map SDK. This provides exceptional performance as the map only needs to load and manage a few dozen images at most. However, there are some limitations with using raster tiles since the raw data is not available locally. A secondary service is often required to power any type of interaction experience, for example, find out what shape a user clicked on. Additionally, the file size of a raster tile is often larger than a compressed vector tile that contains generalized and zoom level optimized geometries.
+
+Learn more about data sources in the [Create a data source](create-data-source-web-sdk.md) document.
+
+### Combine multiple datasets into a single vector tile source
+
+The fewer data sources the map has to manage, the faster it can process all features to be displayed. In particular, when it comes to tile sources, combining two vector tile sources cuts the number of HTTP requests to retrieve the tiles in half, and the total amount of data is slightly smaller since there is only one file header.
+
+Combining multiple data sets in a single vector tile source can be achieved using a tool such as [Tippecanoe](https://github.com/mapbox/tippecanoe). Data sets can be combined into a single feature collection or separated into separate layers within the vector tile known as source-layers. When connecting a vector tile source to a rendering layer, you would specify the source-layer that contains the data that you want to render with the layer.
+
+### Reduce the number of canvas refreshes due to data updates
+
+There are several ways data in a `DataSource` class can be added or updated. Listed below are the different methods and some considerations to ensure good performance.
+
+* The data source's `add` function can be used to add one or more features to a data source. Each time this function is called, it triggers a map canvas refresh. If adding many features, combine them into an array or feature collection and pass them into this function once, rather than looping over a data set and calling this function for each feature.
+* The data source's `setShapes` function can be used to overwrite all shapes in a data source. Under the hood, it combines the data source's `clear` and `add` functions and does a single map canvas refresh instead of two, which is much faster. Be sure to use this when you want to update all data in a data source.
+* The data source's `importDataFromUrl` function can be used to load a GeoJSON file via a URL into a data source. Once the data has been downloaded, it is passed into the data source's `add` function. If the GeoJSON file is hosted on a different domain, be sure that the other domain supports cross-origin requests (CORS). If it doesn't, consider copying the data to a local file on your domain or creating a proxy service that has CORS enabled. If the file is large, consider converting it into a vector tile source.
+* If features are wrapped with the `Shape` class, the `addProperty`, `setCoordinates`, and `setProperties` functions of the shape all trigger an update in the data source and a map canvas refresh. All features returned by the data source's `getShapes` and `getShapeById` functions are automatically wrapped with the `Shape` class. If you want to update several shapes, it is faster to convert them to JSON using the data source's `toJson` function, edit the GeoJSON, and then pass this data into the data source's `setShapes` function.
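As an illustration of the first point above, batch features into one collection and make a single `add` call. This is a sketch: `toFeatureCollection` is a hypothetical helper, and the data source `add` call is commented out because it requires the SDK in a browser:

```javascript
// Sketch: wrap many GeoJSON features in one FeatureCollection so the
// data source's add function is called once, causing one canvas refresh
// instead of one per feature. 'toFeatureCollection' is a hypothetical helper.
function toFeatureCollection(features) {
  return { type: 'FeatureCollection', features: features };
}

// Build some example point features.
const features = [0, 1, 2].map(i => ({
  type: 'Feature',
  geometry: { type: 'Point', coordinates: [i, i] },
  properties: { id: i }
}));

const collection = toFeatureCollection(features);
// datasource.add(collection); // one refresh, not three
```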
+
+### Avoid calling the data sources clear function unnecessarily
+
+Calling the clear function of the `DataSource` class causes a map canvas refresh. If the `clear` function is called multiple times in a row, a delay can occur while the map waits for each refresh to occur.
+
+A common scenario in applications is when an app clears the data source, downloads new data, clears the data source again, and then adds the new data to the data source. Depending on the desired user experience, the following alternatives would be better.
+
+* Clear the data before downloading the new data, then pass the new data into the data sources `add` or `setShapes` function. If this is the only data set on the map, the map will be empty while the new data is downloading.
+* Download the new data, then pass it into the data sources `setShapes` function. This will replace all the data on the map.
+
+### Remove unused features and properties
+
+If your dataset contains features that aren't going to be used in your app, remove them. Similarly, remove any properties on features that aren't needed. This has several benefits:
+
+* Reduces the amount of data that has to be downloaded.
+* Reduces the number of features that need to be looped through when rendering the data.
+* Can sometimes help simplify or remove data-driven expressions and filters, which means less processing is required at render time.
+
+When features have a lot of properties or content, it is much more performant to limit what gets added to the data source to just what is needed for rendering, and to have a separate method or service for retrieving the additional property or content when needed. For example, suppose you have a simple map that displays locations and shows detailed content when a location is clicked. If you want to use data-driven styling to customize how the locations are rendered on the map, load only the properties needed into the data source. When you want to display the detailed content, use the ID of the feature to retrieve the additional content separately. If the content is stored server-side, a service can be used to retrieve it asynchronously, which drastically reduces the amount of data that needs to be downloaded when the map initially loads.
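A minimal sketch of trimming feature properties before adding them to a data source; `keepProperties` and the property names are hypothetical:

```javascript
// Sketch: keep only the properties needed for rendering; retrieve the
// rest later by feature id. 'keepProperties' is a hypothetical helper.
function keepProperties(feature, keys) {
  const trimmed = {};
  for (const key of keys) {
    if (key in feature.properties) {
      trimmed[key] = feature.properties[key];
    }
  }
  // Return a copy so the original feature is left untouched.
  return { ...feature, properties: trimmed };
}

const feature = {
  type: 'Feature',
  geometry: { type: 'Point', coordinates: [0, 0] },
  properties: { id: 7, category: 'cafe', description: 'a long content blob...' }
};

// Only 'id' and 'category' are needed for data-driven styling here.
const light = keepProperties(feature, ['id', 'category']);
```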
+
+Additionally, reducing the number of significant digits in the coordinates of features can also significantly reduce the data size. It is not uncommon for coordinates to contain 12 or more decimal places; however, six decimal places have an accuracy of about 0.1 meter, which is often more precise than the location the coordinate represents (six decimal places is recommended when working with small location data such as indoor building layouts). Having any more than six decimal places will likely make no difference in how the data is rendered and will only require the user to download more data for no added benefit.
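The rounding itself is straightforward; for instance (a sketch, with a hypothetical `roundCoordinate` helper):

```javascript
// Sketch: round a [longitude, latitude] pair to six decimal places
// (~0.1 m accuracy). 'roundCoordinate' is a hypothetical helper name.
function roundCoordinate(position, decimals = 6) {
  const factor = Math.pow(10, decimals);
  return position.map(v => Math.round(v * factor) / factor);
}

const verbose = [-122.335167123456789, 47.608013987654321];
const compact = roundCoordinate(verbose);
// compact → [-122.335167, 47.608014]
```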
+
+Here is a list of [useful tools for working with GeoJSON data](https://github.com/tmcw/awesome-geojson).
+
+### Use a separate data source for rapidly changing data
+
+Sometimes there is a need to rapidly update data on the map for things such as showing live updates of streaming data or animating features. When a data source is updated, the rendering engine will loop through and render all features in the data source. Separating static data from rapidly changing data into different data sources can significantly reduce the number of features that are re-rendered on each update to the data source and improve overall performance.
+
+If using vector tiles with live data, an easy way to support updates is to use the `expires` response header. By default, any vector tile source or raster tile layer will automatically reload tiles when the `expires` date passes. The traffic flow and incident tiles in the map use this feature to ensure fresh real-time traffic data is displayed on the map. This feature can be disabled by setting the maps `refreshExpiredTiles` service option to `false`.
+
+### Adjust the buffer and tolerance options in GeoJSON data sources
+
+The `DataSource` class converts raw location data into local vector tiles for on-the-fly rendering. These local vector tiles clip the raw data to the bounds of the tile area, with a bit of buffer to ensure smooth rendering between tiles. The smaller the `buffer` option is, the less overlapping data is stored in the local vector tiles and the better the performance; however, the greater the chance of rendering artifacts occurring. Try tweaking this option to get the right mix of performance with minimal rendering artifacts.
+
+The `DataSource` class also has a `tolerance` option that is used with the Douglas-Peucker simplification algorithm when reducing the resolution of geometries for rendering purposes. Increasing this tolerance value will reduce the resolution of geometries and in turn improve performance. Tweak this option to get the right mix of geometry resolution and performance for your data set.
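A minimal sketch of tuning these two options; the values shown are illustrative starting points, not recommendations:

```javascript
//Illustrative options for a GeoJSON data source. A smaller buffer stores less
//overlapping data in each local vector tile; a larger tolerance simplifies
//geometries more aggressively.
var dataSourceOptions = {
    buffer: 8,
    tolerance: 0.75
};

//In the browser, these options would be passed to the data source, for example:
//var source = new atlas.source.DataSource(null, dataSourceOptions);
```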
+
+### Set the max zoom option of GeoJSON data sources
+
+The `DataSource` class converts raw location data into local vector tiles for on-the-fly rendering. By default, it does this up to zoom level 18; when zoomed in closer, it samples data from the tiles generated for zoom level 18. This works well for most data sets that need high resolution when zoomed in at these levels. However, when working with data sets that are more likely to be viewed when zoomed out, such as state or province polygons, setting the `maxZoom` option of the data source to a smaller value, such as `12`, reduces the amount of computation and local tile generation that occurs, reduces the memory used by the data source, and increases performance.
+
+### Minimize GeoJSON response
+
+When loading GeoJSON data from a server, either through a service or by loading a flat file, be sure to minimize the data to remove unneeded whitespace characters that make the download size larger than needed.
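For example, serializing GeoJSON with `JSON.stringify` and no formatting arguments produces a minified payload:

```javascript
//The same feature serialized with and without indentation.
var feature = {
    type: 'Feature',
    geometry: { type: 'Point', coordinates: [-122.33, 47.6] },
    properties: { name: 'Sample point' }
};

var pretty = JSON.stringify(feature, null, 4); //Indented for readability.
var minified = JSON.stringify(feature);        //No unneeded whitespace.

//The minified string is smaller and contains no newlines.
```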
+
+### Access raw GeoJSON using a URL
+
+It is possible to store GeoJSON objects inline inside JavaScript; however, this uses a lot of memory, as copies are stored across both the variable you created for the object and the data source instance, which manages it within a separate web worker. Instead, expose the GeoJSON to your app using a URL, and the data source will load a single copy of the data directly into its web worker.
+
+## Optimize rendering layers
+
+Azure Maps provides several different layers for rendering data on a map. There are many optimizations you can take advantage of to tailor these layers to your scenario, increasing performance and improving the overall user experience.
+
+### Create layers once and reuse them
+
+The Azure Maps Web SDK is designed to be data driven. Data goes into data sources, which are then connected to rendering layers. If you want to change the data on the map, update the data in the data source or change the style options on a layer. This is often much faster than removing and recreating layers whenever there is a change.
+
+### Consider bubble layer over symbol layer
+
+The bubble layer renders points as circles on the map, and their radius and color can easily be styled using a data-driven expression. Since a circle is a simple shape for WebGL to draw, the rendering engine can render circles much faster than a symbol layer, which has to load and render an image. The performance difference between these two rendering layers is noticeable when rendering tens of thousands of points.
+
+### Use HTML markers and Popups sparingly
+
+Unlike most layers in the Azure Maps Web control, which use WebGL for rendering, HTML markers and popups use traditional DOM elements for rendering. As such, the more HTML markers and popups added to a page, the more DOM elements there are. Performance can degrade after adding a few hundred HTML markers or popups. For larger data sets, consider either clustering your data or using a symbol or bubble layer. For popups, a common strategy is to create a single popup and reuse it by updating its content and position, as shown in the following example:
+
+<br/>
+
+<iframe height='500' scrolling='no' title='Reusing Popup with Multiple Pins' src='//codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true' style='width: 100%;'>See the Pen <a href='https://codepen.io/azuremaps/pen/rQbjvK/'>Reusing Popup with Multiple Pins</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
+</iframe>
+
+That said, if you only have a few points to render on the map, the simplicity of HTML markers may be preferred. Additionally, HTML markers can easily be made draggable if needed.
+
+### Combine layers
+
+The map is capable of rendering hundreds of layers; however, the more layers there are, the more time it takes to render a scene. One strategy to reduce the number of layers is to combine layers that have similar styles or that can be styled using [data-driven styles](data-driven-style-expressions-web-sdk.md).
+
+For example, consider a data set where all features have a `isHealthy` property that can have a value of `true` or `false`. If creating a bubble layer that renders different colored bubbles based on this property, there are several ways to do this as listed below from least performant to most performant.
+
+* Split the data into two data sources based on the `isHealthy` value and attach a bubble layer with a hard-coded color option to each data source.
+* Put all the data into a single data source and create two bubble layers with a hard-coded color option and a filter based on the `isHealthy` property.
+* Put all the data into a single data source, create a single bubble layer with a `case` style expression for the color option based on the `isHealthy` property. Here is a code sample that demonstrates this.
+
+```javascript
+var layer = new atlas.layer.BubbleLayer(source, null, {
+ color: [
+ 'case',
+
+ //Get the 'isHealthy' property from the feature.
+ ['get', 'isHealthy'],
+
+ //If true, make the color 'green'.
+ 'green',
+
+ //If false, make the color red.
+ 'red'
+ ]
+});
+```
+
+### Create smooth symbol layer animations
+
+Symbol layers have collision detection enabled by default. This collision detection aims to ensure that no two symbols overlap. The icon and text options of a symbol layer each include two relevant settings:
+
+* `allowOverlap` - specifies if the symbol will be visible if it collides with other symbols.
+* `ignorePlacement` - specifies if the other symbols are allowed to collide with the symbol.
+
+Both of these options are set to `false` by default. When animating a symbol, the collision detection calculations will run on each frame of the animation, which can slow down the animation and make it look less fluid. To smooth the animation out, set these options to `true`.
+
+The following code sample shows a simple way to animate a symbol layer.
+
+<br/>
+
+<iframe height="500" style="width: 100%;" scrolling="no" title="Symbol layer animation" src="https://codepen.io/azuremaps/embed/oNgGzRd?height=500&theme-id=default&default-tab=js,result" frameborder="no" allowtransparency="true" allowfullscreen="true">
+ See the Pen <a href='https://codepen.io/azuremaps/pen/oNgGzRd'>Symbol layer animation</a> by Azure Maps
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
+</iframe>
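A minimal sketch of the options described above, assuming the standard symbol layer options shape:

```javascript
//Enable overlap and ignore placement so collision detection does not run on
//every animation frame.
var symbolLayerOptions = {
    iconOptions: {
        allowOverlap: true,
        ignorePlacement: true
    },
    textOptions: {
        allowOverlap: true,
        ignorePlacement: true
    }
};

//In the browser, these options would be passed to a symbol layer, for example:
//var layer = new atlas.layer.SymbolLayer(dataSource, null, symbolLayerOptions);
```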
+
+### Specify zoom level range
+
+If your data meets one of the following criteria, be sure to specify the min and max zoom level of the layer so that the rendering engine can skip it when outside of the zoom level range.
+
+* If the data is coming from a vector tile source, often source layers for different data types are only available through a range of zoom levels.
+* If using a tile layer that doesn't have tiles for all zoom levels 0 through 24 and you want it to render only at the levels where it has tiles, rather than trying to fill in missing tiles with tiles from other zoom levels.
+* If you only want to render a layer at certain zoom levels.
+
+All layers have `minZoom` and `maxZoom` options; the layer is only rendered when the zoom level is between these values, based on the logic `maxZoom > zoom >= minZoom`.
+
+**Example**
+
+```javascript
+//Only render this layer between zoom levels 1 and 9.
+var layer = new atlas.layer.BubbleLayer(dataSource, null, {
+ minZoom: 1,
+ maxZoom: 10
+});
+```
+
+### Specify tile layer bounds and source zoom range
+
+By default, tile layers load tiles across the whole globe. However, if the tile service only has tiles for a certain area, the map will try to load tiles outside of this area. When this happens, a request is made for each tile, and the map waits for a response; this can block other requests being made by the map and slow the rendering of other layers. Specifying the bounds of a tile layer results in the map only requesting tiles that are within that bounding box. Also, if the tile layer is only available between certain zoom levels, specify the min and max source zoom for the same reason.
+
+**Example**
+
+```javascript
+var tileLayer = new atlas.layer.TileLayer({
+ tileUrl: 'myTileServer/{z}/{y}/{x}.png',
+ bounds: [-101.065, 14.01, -80.538, 35.176],
+ minSourceZoom: 1,
+ maxSourceZoom: 10
+});
+```
+
+### Use a blank map style when base map not visible
+
+If a layer being overlaid on the map completely covers the base map, consider setting the map style to `blank` or `blank_accessible` so that the base map isn't rendered. A common scenario for doing this is overlaying a full-globe tile layer that has no opacity or transparent areas above the base map.
+
+### Smoothly animate image or tile layers
+
+If you want to animate through a series of image or tile layers on the map, it is often faster to create a layer for each image or tile set and change the layer opacity than to update the source of a single layer on each animation frame. Hiding a layer by setting its opacity to zero, and showing a new layer by setting its opacity to a value greater than zero, is much faster than updating the source in the layer. Alternatively, the visibility of the layers can be toggled, but be sure to set the fade duration of the layer to zero; otherwise, it will animate the layer when displaying it, causing a flicker effect because the previous layer would be hidden before the new layer is visible.
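The opacity-toggling approach can be sketched with a small helper (hypothetical, not part of the SDK) that computes which pre-created layer should be visible on a given frame:

```javascript
//Hypothetical helper: returns an opacity value per layer so that exactly one
//layer in the animation sequence is visible for the given frame.
function getFrameOpacities(numLayers, frame) {
    var opacities = [];
    for (var i = 0; i < numLayers; i++) {
        opacities.push(i === frame % numLayers ? 1 : 0);
    }
    return opacities;
}

//In the browser, each opacity would be applied to its layer on each frame:
//opacities.forEach((o, i) => layers[i].setOptions({ opacity: o, fadeDuration: 0 }));

getFrameOpacities(3, 4); //Returns [0, 1, 0]
```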
+
+### Tweak Symbol layer collision detection logic
+
+The symbol layer has two options that exist for both icon and text called `allowOverlap` and `ignorePlacement`. These two options specify if the icon or text of a symbol can overlap or be overlapped. When these are set to `false`, the symbol layer will do calculations when rendering each point to see if it collides with any other already rendered symbol in the layer; if it does, the colliding symbol is not rendered. This is good for reducing clutter on the map and reducing the number of objects rendered. By setting these options to `true`, this collision detection logic is skipped, and all symbols are rendered on the map. Tweak these options to get the best combination of performance and user experience.
+
+### Cluster large point data sets
+
+When working with large sets of data points, you may find that, when rendered at certain zoom levels, many of the points overlap and are only partially visible, if at all. Clustering is the process of grouping points that are close together and representing them as a single clustered point. As the user zooms in, clusters break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally. Additionally, many tools that generate vector tiles also have clustering options.
+
+Additionally, increase the size of the cluster radius to improve performance. The larger the cluster radius, the fewer clustered points there are to keep track of and render.
+
+Learn more in the [Clustering point data document](clustering-point-data-web-sdk.md).
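A minimal sketch of enabling clustering on a data source; the radius and max zoom values are illustrative:

```javascript
//Illustrative clustering options for a GeoJSON data source.
var clusterOptions = {
    cluster: true,
    clusterRadius: 45,   //Pixel radius in which nearby points are grouped.
    clusterMaxZoom: 15   //Stop clustering when zoomed in past this level.
};

//In the browser, these options would be passed to the data source, for example:
//var source = new atlas.source.DataSource(null, clusterOptions);
```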
+
+### Use weighted clustered heat maps
+
+The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source, using a small cluster radius, and using the cluster's `point_count` property as the weight for the heat map. When the cluster radius is only a few pixels in size, there will be little visual difference in the rendered heat map. Using a larger cluster radius will improve performance more but may reduce the resolution of the rendered heat map.
+
+```javascript
+var layer = new atlas.layer.HeatMapLayer(source, null, {
+ weight: ['get', 'point_count']
+});
+```
+
+Learn more in the [Clustering and the heat maps layer](clustering-point-data-web-sdk.md#clustering-and-the-heat-maps-layer) section.
+
+### Keep image resources small
+
+Images can be added to the map's image sprite for rendering icons in a symbol layer or patterns in a polygon layer. Keep these images small to minimize the amount of data that has to be downloaded and the amount of space they take up in the map's image sprite. When using a symbol layer that scales the icon using the `size` option, use an image that is the maximum size you plan to display on the map and no bigger. This ensures the icon is rendered with high resolution while minimizing the resources it uses. Additionally, SVGs can be used as a smaller file format for simple icon images.
+
+## Optimize expressions
+
+[Data-driven style expressions](data-driven-style-expressions-web-sdk.md) provide a lot of flexibility and power for filtering and styling data on the map. There are many ways in which expressions can be optimized. Here are a few tips.
+
+### Reduce the complexity of filters
+
+Filters loop over all data in a data source and check whether each feature matches the logic in the filter. If filters become complex, this can cause performance issues. Some possible strategies to address this include the following.
+
+* If using vector tiles, break up the data into different source layers.
+* If using the `DataSource` class, break the data up into separate data sources. Try to balance the number of data sources with the complexity of the filter. Too many data sources can also cause performance issues, so you might need to do some testing to find out what works best for your scenario.
+* When using a complex filter on a layer, consider using multiple layers with style expressions to reduce the complexity of the filter. Avoid creating a bunch of layers with hardcoded styles when style expressions can be used as a large number of layers can also cause performance issues.
+
+### Make sure expressions don't produce errors
+
+Expressions are often used to generate code that performs calculations or logical operations at render time. Just like the code in the rest of your application, be sure the calculations and logic make sense and are not error prone. Errors in expressions cause issues in evaluating the expression, which can result in reduced performance and rendering issues.
+
+One common error to be mindful of is having an expression that relies on a feature property that might not exist on all features. For example, the following code uses an expression to set the color property of a bubble layer to the `myColor` property of a feature.
+
+```javascript
+var layer = new atlas.layer.BubbleLayer(source, null, {
+ color: ['get', 'myColor']
+});
+```
+
+The above code will function fine if all features in the data source have a `myColor` property, and the value of that property is a color. This may not be an issue if you have complete control of the data in the data source and know for certain all features will have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green.
+
+```javascript
+var layer = new atlas.layer.BubbleLayer(source, null, {
+ color: [
+ 'case',
+
+ //Check to see if the feature has a 'myColor' property.
+ ['has', 'myColor'],
+
+ //If true, try validating that 'myColor' value is a color, or fallback to 'green'.
+ ['to-color', ['get', 'myColor'], 'green'],
+
+ //If false, return a fallback value.
+ 'green'
+ ]
+});
+```
+
+### Order boolean expressions from most specific to least specific
+
+When using boolean expressions that contain multiple conditional tests, order the conditional tests from most specific to least specific. By doing this, the first condition should reduce the amount of data the second condition has to be tested against, thus reducing the total number of conditional tests that need to be performed.
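This principle can be illustrated outside of the expression engine with plain JavaScript (a hypothetical sketch, not how the SDK evaluates expressions): with short-circuiting conditions, putting the most specific condition first leaves less work for the second.

```javascript
//Count how many individual tests run when filtering 1,000 features,
//depending on the order of two short-circuited conditions.
var features = [];
for (var i = 0; i < 1000; i++) {
    features.push({ category: i < 10 ? 'restaurant' : 'other', rating: i % 5 });
}

function countTests(features, conditions) {
    var tests = 0;
    features.forEach(function (f) {
        for (var c = 0; c < conditions.length; c++) {
            tests++;
            if (!conditions[c](f)) {
                break; //Short-circuit, like an 'all' expression.
            }
        }
    });
    return tests;
}

var isRestaurant = function (f) { return f.category === 'restaurant'; }; //Matches 1% of features.
var isHighRated = function (f) { return f.rating >= 3; };                //Matches 40% of features.

var specificFirst = countTests(features, [isRestaurant, isHighRated]); //1,010 tests.
var broadFirst = countTests(features, [isHighRated, isRestaurant]);    //1,400 tests.
```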
+
+### Simplify expressions
+
+Expressions can be powerful and sometimes complex. The simpler an expression is, the faster it will be evaluated. For example, if a simple comparison is needed, an expression like `['==', ['get', 'category'], 'restaurant']` is better than using a match expression like `['match', ['get', 'category'], 'restaurant', true, false]`. In this case, if the property being checked is a boolean value, a `get` expression is even simpler: `['get', 'isRestaurant']`.
+
+## Web SDK troubleshooting
+
+The following are some tips for debugging common issues encountered when developing with the Azure Maps Web SDK.
+
+**Why doesn't the map display when I load the web control?**
+
+Do the following:
+
+* Ensure that you have added your authentication options to the map. If these are not added, the map will load with a blank canvas because it can't access the base map data without authentication, and 401 errors will appear in the network tab of the browser's developer tools.
+* Ensure that you have an internet connection.
+* Check the console of the browser's developer tools for errors. Some errors may prevent the map from rendering. Debug your application.
+* Ensure you are using a [supported browser](supported-browsers.md).
+
+**All my data is showing up on the other side of the world. What's going on?**
+
+Coordinates, also referred to as positions, in the Azure Maps SDKs align with the geospatial industry standard format of `[longitude, latitude]`. This same format is also how coordinates are defined in the GeoJSON schema, the core data format used within the Azure Maps SDKs. If your data is appearing on the opposite side of the world, it is most likely because the longitude and latitude values are reversed in your coordinate or position information.
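If a data provider delivers `[latitude, longitude]` pairs, a small conversion (a hypothetical helper) restores the expected order:

```javascript
//Swap a [latitude, longitude] pair into GeoJSON's [longitude, latitude] order.
function latLonToPosition(latLon) {
    return [latLon[1], latLon[0]];
}

latLonToPosition([47.6, -122.33]); //Returns [-122.33, 47.6]
```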
+
+**Why are HTML markers appearing in the wrong place in the web control?**
+
+Things to check:
+
+* If using custom content for the marker, ensure the `anchor` and `pixelOffset` options are correct. By default, the bottom center of the content is aligned with the position on the map.
+* Ensure that the CSS file for Azure Maps has been loaded.
+* Inspect the HTML marker DOM element to see if any CSS from your app has appended itself to the marker and is affecting its position.
+
+**Why are icons or text in the symbol layer appearing in the wrong place?**
+
+Check that the `anchor` and `offset` options are correctly configured to align the part of your image or text that you want anchored to the coordinate on the map.
+
+If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols rotate with the map's viewport so that they appear upright to the user. However, depending on your scenario, it may be desirable to lock the symbol to the map's orientation. Set the `rotationAlignment` option to `'map'` to do this.
+
+If the symbol is only out of place when the map is pitched or tilted, check the `pitchAlignment` option. By default, symbols stay upright in the map's viewport as the map is pitched or tilted. However, depending on your scenario, it may be desirable to lock the symbol to the map's pitch. Set the `pitchAlignment` option to `'map'` to do this.
+
+**Why isn't any of my data appearing on the map?**
+
+Things to check:
+
+* Check the console in the browser's developer tools for errors.
+* Ensure that a data source has been created and added to the map, and that the data source has been connected to a rendering layer that has also been added to the map.
+* Add break points in your code and step through it to ensure data is being added to the data source and the data source and layers are being added to the map without any errors occurring.
+* Try removing data-driven expressions from your rendering layer. It's possible that one of them may have an error in it that is causing the issue.
+
+**Can I use the Azure Maps Web SDK in a sandboxed iframe?**
+
+Yes. Note that [Safari has a bug](https://bugs.webkit.org/show_bug.cgi?id=170075) that prevents sandboxed iframes from running web workers, which are a requirement of the Azure Maps Web SDK. The solution is to add the `"allow-same-origin"` tag to the sandbox property of the iframe.
+
+## Get support
+
+The following are the different ways to get support for Azure Maps depending on your issue.
+
+**How do I report a data issue or an issue with an address?**
+
+Azure Maps has a data feedback tool where data issues can be reported and tracked: [https://feedback.azuremaps.com/](https://feedback.azuremaps.com/). Each issue submitted generates a unique URL you can use to track the progress of the data issue. The time it takes to resolve a data issue varies depending on the type of issue and how easy it is to verify that the change is correct. Once fixed, the render service will see the update in the weekly release, while other services, such as geocoding and routing, will see the update in the monthly release. Detailed instructions on how to report a data issue are provided in [this document](how-to-use-feedback-tool.md).
+
+**How do I report a bug in a service or API?**
+
+https://azure.com/support
+
+**Where do I get technical help for Azure Maps?**
+
+If related to the Azure Maps visual in Power BI: https://powerbi.microsoft.com/support/
+
+For all other Azure Maps questions, use the developer forums: https://docs.microsoft.com/answers/topics/azure-maps.html
+
+**How do I make a feature request?**
+
+Make a feature request on our user voice site: https://feedback.azure.com/forums/909172-azure-maps
+
+## Next steps
+
+See the following articles for more tips on improving the user experience in your application.
+
+> [!div class="nextstepaction"]
+> [Make your application accessible](map-accessibility.md)
+
+Learn more about the terminology used by Azure Maps and the geospatial industry.
+
+> [!div class="nextstepaction"]
+> [Azure Maps glossary](glossary.md)
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-linux-troubleshoot.md
This article provides help troubleshooting errors you might experience with the
If none of these steps work for you, the following support channels are also available:

* Customers with Premier support benefits can open a support request with [Premier](https://premier.microsoft.com/).
-* Customers with Azure support agreements can open a support request [in the Azure portal](https://manage.windowsazure.com/?getsupport=true).
+* Customers with Azure support agreements can open a support request [in the Azure portal](https://azure.microsoft.com/support/options/).
* Diagnose OMI problems with the [OMI troubleshooting guide](https://github.com/Microsoft/omi/blob/master/Unix/doc/diagnose-omi-problems.md).
* File a [GitHub issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues).
* Visit the Log Analytics Feedback page to review submitted ideas and bugs [https://aka.ms/opinsightsfeedback](https://aka.ms/opinsightsfeedback) or file a new one.
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agent-windows-troubleshoot.md
This article provides help troubleshooting errors you might experience with the
If none of these steps work for you, the following support channels are also available:

* Customers with Premier support benefits can open a support request with [Premier](https://premier.microsoft.com/).
-* Customers with Azure support agreements can open a support request [in the Azure portal](https://manage.windowsazure.com/?getsupport=true).
+* Customers with Azure support agreements can open a support request [in the Azure portal](https://azure.microsoft.com/support/options/).
* Visit the Log Analytics Feedback page to review submitted ideas and bugs [https://aka.ms/opinsightsfeedback](https://aka.ms/opinsightsfeedback) or file a new one.

## Log Analytics Troubleshooting Tool
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-sampling-overrides.md
+
+ Title: Sampling overrides (preview) - Azure Monitor Application Insights for Java
+description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java.
+ Last updated : 03/22/2021
+# Sampling overrides (preview) - Azure Monitor Application Insights for Java
+
+> [!NOTE]
+> The sampling overrides feature is in preview.
+
+Here are some use cases for sampling overrides:
+ * Suppress collecting telemetry for health checks.
+ * Suppress collecting telemetry for noisy dependency calls.
+ * Reduce the noise from health checks or noisy dependency calls without suppressing them completely.
+ * Collect 100% of telemetry for an important request type (e.g. `/login`) even though you have default sampling
+ configured to something lower.
+
+## Terminology
+
+Before you learn about sampling overrides, you should understand the term *span*. A span is a general term for:
+
+* An incoming request.
+* An outgoing dependency (for example, a remote call to another service).
+* An in-process dependency (for example, work being done by subcomponents of the service).
+
+For sampling overrides, these span components are important:
+
+* Attributes
+
+The span attributes represent both standard and custom properties of a given request or dependency.
+
+## Getting started
+
+To begin, create a configuration file named *applicationinsights.json*. Save it in the same directory as *applicationinsights-agent-\*.jar*. Use the following template.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "sampling": {
+ "percentage": 10
+ },
+ "preview": {
+ "sampling": {
+ "overrides": [
+ {
+ "attributes": [
+ ...
+ ],
+ "percentage": 0
+ },
+ {
+ "attributes": [
+ ...
+ ],
+ "percentage": 100
+ }
+ ]
+ }
+ }
+}
+```
+
+## How it works
+
+When a span is started, the attributes present on the span at that time are used to check if any of the sampling
+overrides match.
+
+If one of the sampling overrides match, then its sampling percentage is used to decide whether to sample the span or
+not.
+
+Only the first sampling override that matches is used.
+
+If no sampling overrides match:
+
+* If this is the first span in the trace, then the [normal sampling percentage](./java-standalone-config.md#sampling)
+ is used.
+* If this is not the first span in the trace, then the parent sampling decision is used.
+
+> [!IMPORTANT]
+> When a decision has been made to not collect a span, then all downstream spans will also not be collected,
+> even if there are sampling overrides that match the downstream span.
+> This behavior is necessary because otherwise broken traces would result, with downstream spans being collected
+> but being parented to spans that were not collected.
+
+> [!NOTE]
+> The sampling decision is based on hashing the traceId (also known as the operationId) to a number between 0 and 100,
+> and that hash is then compared to the sampling percentage.
+> Since all spans in a given trace will have the same traceId, they will have the same hash,
+> and so the sampling decision will be consistent across the whole trace.
+
+## Example: Suppress collecting telemetry for health checks
+
+This will suppress collecting telemetry for all requests to `/health-checks`.
+
+This will also suppress collecting any downstream spans (dependencies) that would normally be collected under
+`/health-checks`.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "sampling": {
+ "overrides": [
+ {
+ "attributes": [
+ {
+ "key": "http.url",
+ "value": "https?://[^/]+/health-check",
+ "matchType": "regexp"
+ }
+ ],
+ "percentage": 0
+ }
+ ]
+ }
+ }
+}
+```
+
+## Example: Suppress collecting telemetry for a noisy dependency call
+
+This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "preview": {
+ "sampling": {
+ "overrides": [
+ {
+ "attributes": [
+ {
+ "key": "db.system",
+ "value": "redis",
+ "matchType": "strict"
+ },
+ {
+ "key": "db.statement",
+ "value": "GET my-noisy-key",
+ "matchType": "strict"
+ }
+ ],
+ "percentage": 0
+ }
+ ]
+ }
+ }
+}
+```
+
+## Example: Collect 100% of telemetry for an important request type
+
+This will collect 100% of telemetry for `/login`.
+
+Since downstream spans (dependencies) respect the parent's sampling decision
+(absent any sampling override for that downstream span),
+those will also be collected for all `/login` requests.
+
+```json
+{
+ "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "sampling": {
+ "percentage": 10
+ },
+ "preview": {
+ "sampling": {
+ "overrides": [
+ {
+ "attributes": [
+ {
+ "key": "http.url",
+ "value": "https?://[^/]+/login",
+ "matchType": "regexp"
+ }
+ ],
+ "percentage": 100
+ }
+ ]
+ }
+ }
+}
+```
+
+## Common span attributes
+
+This section lists some common span attributes that sampling overrides can use.
+
+### HTTP spans
+
+| Attribute | Type | Description |
+||||
+| `http.method` | string | HTTP request method.|
+| `http.url` | string | Full HTTP request URL in the form `scheme://host[:port]/path?query[#fragment]`. The fragment isn't usually transmitted over HTTP. But if the fragment is known, it should be included.|
+| `http.status_code` | number | [HTTP response status code](https://tools.ietf.org/html/rfc7231#section-6).|
+| `http.flavor` | string | Type of HTTP protocol. |
+| `http.user_agent` | string | Value of the [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3) header sent by the client. |
+
+### JDBC spans
+
+| Attribute | Type | Description |
+||||
+| `db.system` | string | Identifier for the database management system (DBMS) product being used. |
+| `db.connection_string` | string | Connection string used to connect to the database. It's recommended to remove embedded credentials.|
+| `db.user` | string | Username for accessing the database. |
+| `db.name` | string | String used to report the name of the database being accessed. For commands that switch the database, this string should be set to the target database, even if the command fails.|
+| `db.statement` | string | Database statement that's being run.|
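For illustration, a sampling override can key on one of these JDBC attributes. The following sketch follows the override schema used in the examples above to suppress telemetry for a hypothetical connection health-check query; the statement text and the `strict` match type are illustrative assumptions:

```json
{
  "preview": {
    "sampling": {
      "overrides": [
        {
          "attributes": [
            {
              "key": "db.statement",
              "value": "SELECT 1",
              "matchType": "strict"
            }
          ],
          "percentage": 0
        }
      ]
    }
  }
}
```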
azure-monitor Profiler Aspnetcore Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-aspnetcore-linux.md
The following instructions apply to all Windows, Linux, and Mac development envi
dotnet add package Microsoft.ApplicationInsights.Profiler.AspNetCore ```
-1. Enable Application Insights in Program.cs:
-
- ```csharp
- public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
- WebHost.CreateDefaultBuilder(args)
- .UseApplicationInsights() // Add this line of code to Enable Application Insights
- .UseStartup<Startup>();
- ```
-
-1. Enable Profiler in Startup.cs:
+1. Enable Application Insights and Profiler in Startup.cs:
    ```csharp
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddApplicationInsightsTelemetry(); // Add this line of code to enable Application Insights.
        services.AddServiceProfiler(); // Add this line of code to enable Profiler.
        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    }
    ```
If you use custom containers that are hosted by Azure App Service, follow the in
Enable Service Profiler for a containerized ASP.NET Core application](https://github.com/Microsoft/ApplicationInsights-Profiler-AspNetCore/tree/master/examples/EnableServiceProfilerForContainerApp) to enable Application Insights Profiler. Report any issues or suggestions to the Application Insights GitHub repository:
-[ApplicationInsights-Profiler-AspNetCore: Issues](https://github.com/Microsoft/ApplicationInsights-Profiler-AspNetCore/issues).
+[ApplicationInsights-Profiler-AspNetCore: Issues](https://github.com/Microsoft/ApplicationInsights-Profiler-AspNetCore/issues).
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
Each type of SQL offers methods for your monitoring virtual machine to securely
### Azure SQL Databases
-[Tutorial - Connect to an Azure SQL server using an Azure Private Endpoint - Azure portal](../../private-link/tutorial-private-endpoint-sql-portal.md) provides an example for how to setup a private endpoint that you can use to access your database. If you use this method, you will need to ensure your monitoring virtual machines is in the same VNET and subnet that you will be using for the private endpoint. You can then create the private endpoint on your database if you have not already done so.
+SQL insights supports accessing your Azure SQL Database via its public endpoint as well as from its virtual network.
-If you use a [firewall setting](../../azure-sql/database/firewall-configure.md) to provide access to your SQL Database, you need to add a firewall rule to provide access from the public IP address of the monitoring virtual machine. You can access the firewall settings from the **Azure SQL Database Overview** page in the portal.
+For access via the public endpoint, add a rule under the **Firewall settings** page in the [IP firewall settings](https://docs.microsoft.com/azure/azure-sql/database/network-access-controls-overview#ip-firewall-rules) section. To specify access from a virtual network, you can set [virtual network firewall rules](https://docs.microsoft.com/azure/azure-sql/database/network-access-controls-overview#virtual-network-firewall-rules) and set the [service tags required by the Azure Monitor agent](https://docs.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-overview#networking). [This article](https://docs.microsoft.com/azure/azure-sql/database/network-access-controls-overview#ip-vs-virtual-network-firewall-rules) describes the differences between these two types of firewall rules.
:::image type="content" source="media/sql-insights-enable/set-server-firewall.png" alt-text="Set server firewall" lightbox="media/sql-insights-enable/set-server-firewall.png"::: :::image type="content" source="media/sql-insights-enable/firewall-settings.png" alt-text="Firewall settings." lightbox="media/sql-insights-enable/firewall-settings.png":::
+> [!NOTE]
+> SQL insights does not currently support Azure Private Endpoint for Azure SQL Database. We recommend using [Service Tags](https://docs.microsoft.com/azure/virtual-network/service-tags-overview) on your network security group or virtual network firewall settings that the [Azure Monitor agent supports](https://docs.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-overview#networking).
+
### Azure SQL Managed Instances

If your monitoring virtual machine will be in the same VNet as your SQL MI resources, see [Connect inside the same VNet](https://docs.microsoft.com/azure/azure-sql/managed-instance/connect-application-instance#connect-inside-the-same-vnet). If it will be in a different VNet than your SQL MI resources, see [Connect inside a different VNet](https://docs.microsoft.com/azure/azure-sql/managed-instance/connect-application-instance#connect-inside-a-different-vnet).
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-overview.md
See [Enable SQL insights](sql-insights-enable.md) for the detailed procedure to
## Data collected by SQL insights
-In the public preview, SQL insights only supports the remote method of monitoring. The [telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/) is not installed on the SQL Server. It uses the [SQL Server input plugin for telegraf](https://www.influxdata.com/integration/microsoft-sql-server/) and use the three groups of queries for the different types of SQL it monitors: Azure SQL DB, Azure SQL Managed Instance, SQL server running on an Azure VM.
+
+SQL insights only supports the remote method of monitoring SQL. We do not install any agents on the VMs that are running SQL Server. One or more dedicated monitoring VMs are required, which are used to remotely collect data from your SQL resources.
+
+Each of these monitoring VMs will have the [Azure Monitor agent](https://docs.microsoft.com/azure/azure-monitor/agents/azure-monitor-agent-overview) installed on them along with the Workload insights (WLI) extension.
+
+The WLI extension includes the open source [telegraf agent](https://www.influxdata.com/time-series-platform/telegraf/). We use [data collection rules](https://docs.microsoft.com/azure/azure-monitor/agents/data-collection-rule-overview) to configure the [sqlserver input plugin](https://www.influxdata.com/integration/microsoft-sql-server/) to specify the data to collect from Azure SQL DB, Azure SQL Managed Instance, and SQL Server running on an Azure VM.
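As a rough illustration of what the WLI extension configures under the hood, a telegraf `sqlserver` input declaration looks like the following sketch. The connection string is a placeholder and the actual settings are generated from your data collection rules, not hand-written:

```toml
[[inputs.sqlserver]]
  ## Placeholder connection string - real values come from the data collection rule.
  servers = ["Server=myserver.database.windows.net;Port=1433;User Id=telegraf;Password=<secret>;app name=telegraf;"]
  ## One of "AzureSQLDB", "AzureSQLManagedInstance", or "SQLServer".
  database_type = "AzureSQLDB"
```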
The following tables summarize the following:
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
## Next Steps - [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)-- [Monitoring usage and estimated costs in Azure Monitor](/usage-estimated-costs.md)
+- [Monitoring usage and estimated costs in Azure Monitor](/azure/azure-monitor/usage-estimated-costs)
- [Manage usage and costs for Application Insights](app/pricing.md)
azure-netapp-files Azacsnap Cmd Ref Delete https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-cmd-ref-delete.md
It is possible to delete volume snapshots and database catalog entries with the
The `-c delete` command has the following options: -- `--delete hana` when used with the options `--hanasid <SID>` and `--hanabackupid <HANA backup id>` will delete entries from the SAP HANA backup catalog matching the criteria.
+- `--delete hana` when used with the options `--dbsid <SID>` and `--hanabackupid <HANA backup id>` will delete entries from the SAP HANA backup catalog matching the criteria.
- `--delete storage` when used with the option `--snapshot <snapshot name>` will delete the snapshot from the back-end storage system. -- `--delete sync` when used with options `--hanasid <SID>` and `--hanabackupid <HANA backup id>` gets the storage snapshot name from the backup catalog for the `<HANA backup id>`, and then deletes the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.
+- `--delete sync` when used with options `--dbsid <SID>` and `--hanabackupid <HANA backup id>` gets the storage snapshot name from the backup catalog for the `<HANA backup id>`, and then deletes the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.
- `--delete sync` when used with `--snapshot <snapshot name>` will check for any entries in the backup catalog for the `<snapshot name>`, gets the SAP HANA backup ID and deletes both the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.
The `-c delete` command has the following options:
### Delete a snapshot using the `sync` option

```bash
-azacsnap -c delete --delete sync --hanasid H80 --hanabackupid 157979797979
+azacsnap -c delete --delete sync --dbsid H80 --hanabackupid 157979797979
``` > [!NOTE]
and the snapshot from any of the volumes containing the named snapshot.
### Delete a snapshot using the `hana` option

```bash
-azacsnap -c delete --delete hana --hanasid H80 --hanabackupid 157979797979
+azacsnap -c delete --delete hana --dbsid H80 --hanabackupid 157979797979
``` > [!NOTE]
azure-netapp-files Azacsnap Cmd Ref Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-cmd-ref-details.md
List snapshot details called with snapshotFilter ''
> [!NOTE] > This example shows output for snapshots run using the previous version (v4.3) as well as snapshots taken with the latest version (5.0).
+The following example shows output for **Azure NetApp Files**. Note the reduced information returned by this command, as the extra detail is not exposed by the service.
+
+```bash
+azacsnap -c details --details snapshots
+```
+
+```output
+List snapshot details called with snapshotFilter ''
+#, Volume, SnapshotName
+#1, HANADATA_P, snapmirror.dd59bda4-d507-11ea-99fc-00a098f31b77_2155201518.2021-03-22_020000
+#2, HANADATA_P, quarter_hourly__2021-03-22T020002-3678211Z
+#3, HANADATA_P, quarter_hourly__2021-03-22T014502-0716526Z
+#4, HANADATA_P, quarter_hourly__2021-03-22T013002-4080651Z
+#5, HANADATA_P, quarter_hourly__2021-03-22T011502-8376045Z
+#6, HANADATA_P, quarter_hourly__2021-03-22T010002-8810203Z
+#7, HANADATA_P, quarter_hourly__2021-03-22T004502-5983306Z
+#8, HANADATA_P, quarter_hourly__2021-03-22T003002-1168834Z
+#9, HANADATA_P, quarter_hourly__2021-03-22T001501-9781108Z
+#10, HANADATA_P, quarter_hourly__2021-03-22T000002-0865483Z
+#1, HANASHARED_P, quarter_hourly__2021-03-22T020002-3678211Z
+#2, HANASHARED_P, quarter_hourly__2021-03-22T014502-0716526Z
+#3, HANASHARED_P, quarter_hourly__2021-03-22T013002-4080651Z
+#4, HANASHARED_P, quarter_hourly__2021-03-22T011502-8376045Z
+#5, HANASHARED_P, quarter_hourly__2021-03-22T010002-8810203Z
+#6, HANASHARED_P, quarter_hourly__2021-03-22T004502-5983306Z
+#7, HANASHARED_P, quarter_hourly__2021-03-22T003002-1168834Z
+#8, HANASHARED_P, quarter_hourly__2021-03-22T001501-9781108Z
+#9, HANASHARED_P, quarter_hourly__2021-03-22T000002-0865483Z
+#10, HANASHARED_P, quarter_hourly__2021-03-21T234502-3935816Z
+#1, HANALOGBACKUP_P, logs_5mins__2021-03-22T021002-5462356Z
+#2, HANALOGBACKUP_P, logs_5mins__2021-03-22T020502-2390356Z
+#3, HANALOGBACKUP_P, logs_5mins__2021-03-22T015502-3928726Z
+#4, HANALOGBACKUP_P, logs_5mins__2021-03-22T015001-9228768Z
+#5, HANALOGBACKUP_P, logs_5mins__2021-03-22T014002-5554548Z
+#6, HANALOGBACKUP_P, logs_5mins__2021-03-22T013502-1781392Z
+#7, HANALOGBACKUP_P, logs_5mins__2021-03-22T012502-6686004Z
+#8, HANALOGBACKUP_P, logs_5mins__2021-03-22T012002-7348198Z
+#9, HANALOGBACKUP_P, logs_5mins__2021-03-22T011002-2234413Z
+```
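The snapshot list above has a simple `#N, Volume, SnapshotName` shape, so it is easy to post-process. The following is a hypothetical helper (not part of AzAcSnap) that groups snapshot names by volume; the sample data is a subset of the output shown above:

```python
from collections import defaultdict

def parse_snapshot_list(output: str) -> dict:
    """Parse '#N, Volume, SnapshotName' lines into {volume: [snapshot names]}."""
    snapshots = defaultdict(list)
    for line in output.splitlines():
        line = line.strip()
        if not line.startswith("#"):
            continue  # skip headers and blank lines
        parts = [p.strip() for p in line.split(",", 2)]
        if len(parts) == 3:
            _, volume, name = parts
            snapshots[volume].append(name)
    return dict(snapshots)

sample = """#1, HANADATA_P, quarter_hourly__2021-03-22T020002-3678211Z
#2, HANADATA_P, quarter_hourly__2021-03-22T014502-0716526Z
#1, HANALOGBACKUP_P, logs_5mins__2021-03-22T021002-5462356Z"""

for volume, names in parse_snapshot_list(sample).items():
    print(volume, len(names))
```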
+ ### Output of the `azacsnap -c details --details replication` command This command checks the storage replication status from the primary site to DR location and *must*
azure-netapp-files Azacsnap Cmd Ref Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-cmd-ref-restore.md
The `-c restore` command has the following options:
- `--restore revertvolume` Reverts the target volume to a prior state based on the most recent snapshot. Using this command as part of DR Failover into the paired DR region. This command **stops** storage replication from the primary site to the secondary site, and reverts the target DR volume(s) to their latest available snapshot on the DR volumes along with recommended filesystem mountpoints for the reverted DR volumes. This command should be run on the Azure Large Instance system **in the DR region** (that is, the target fail-over system). > [!NOTE] > The sub-command (`--restore revertvolume`) is only available for Azure Large Instance and is not available for Azure NetApp Files.-- `--hanasid <SAP HANA SID>` is the SAP HANA SID being selected from the configuration file to apply the volume restore commands to.
+- `--dbsid <SAP HANA SID>` is the SAP HANA SID being selected from the configuration file to apply the volume restore commands to.
- `[--configfile <config filename>]` is an optional parameter allowing for custom configuration file names.
volumes, otherwise the Production volumes could have clones created.
### Output of the `azacsnap -c restore --restore snaptovol` command (for Single-Node scenario) ```output
-> azacsnap --configfile DR.json -c restore --restore snaptovol --hanasid H80
+> azacsnap --configfile DR.json -c restore --restore snaptovol --dbsid H80
* This program is designed for those customers who have previously installed the Production HANA instance in the Disaster Recovery Location either as a stand-alone instance or as part of a multi-purpose environment.
azure-netapp-files Azacsnap Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-disaster-recovery.md
HDB stop
#### Step 4: Restore the volumes ```bash
-azacsnap -c restore --restore revertvolume --hanasid H80
+azacsnap -c restore --restore revertvolume --dbsid H80
``` **_Output of the DR failover command_**. ```bash
-azacsnap --configfile DR.json -c restore --restore revertvolume --hanasid H80
+azacsnap --configfile DR.json -c restore --restore revertvolume --dbsid H80
``` ```output
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-installation.md
database, change the IP address, usernames, and passwords as appropriate:
USER: AZACSNAP ```
-### Additional instructions for using the log trimmer (SAP HANA 2.0 and later)
-
-If using the log trimmer, then the following example commands set up a user (AZACSNAP) in the
-TENANT database(s) on an SAP HANA 2.0 database system. Remember to change the IP address,
-usernames, and passwords as appropriate:
-
-1. Connect to the TENANT database to create the user, tenant-specific details are `<IP_address_of_host>` and `<SYSTEM_USER_PASSWORD>`. Also, note the port (`30015`) required to communicate with the TENANT database.
-
- ```bash
- hdbsql -n <IP_address_of_host>:30015 - i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD>
- ```
-
- ```output
- Welcome to the SAP HANA Database interactive terminal.
-
- Type: \h for help with commands
- \q to quit
-
- hdbsql TENANTDB=>
- ```
-
-1. Create the user
-
- This example creates the AZACSNAP user in the SYSTEMDB.
-
- ```sql
- hdbsql TENANTDB=> CREATE USER AZACSNAP PASSWORD <AZACSNAP_PASSWORD_CHANGE_ME> NO FORCE_FIRST_PASSWORD_CHANGE;
- ```
-
-1. Grant the user permissions
-
- This example sets the permission for the AZACSNAP user to allow for performing a database
- consistent storage snapshot.
-
- ```sql
- hdbsql TENANTDB=> GRANT BACKUP ADMIN, CATALOG READ, MONITORING TO AZACSNAP;
- ```
-
-1. *OPTIONAL* - Prevent user's password from expiring
-
- > [!NOTE]
- > Check with corporate policy before making this change.
-
- This example disables the password expiration for the AZACSNAP user, without this change the user's password will expire preventing snapshots to be taken correctly.
-
- ```sql
- hdbsql TENANTDB=> ALTER USER AZACSNAP DISABLE PASSWORD LIFETIME;
- ```
-
-> [!NOTE]
-> Repeat these steps for all the tenant databases. It's possible to get the connection details for all the tenants using the following SQL query against the SYSTEMDB.
-
-```sql
-SELECT HOST, SQL_PORT, DATABASE_NAME FROM SYS_DATABASES.M_SERVICES WHERE SQL_PORT LIKE '3%'
-```
-
-See the following example query and output.
-
-```bash
-hdbsql -jaxC -n 10.90.0.31:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> " SELECT HOST,SQL_PORT, DATABASE_NAME FROM SYS_DATABASES.M_SERVICES WHERE SQL_PORT LIKE '3%' "
-```
-
-```output
-sapprdhdb80,30013,SYSTEMDB
-sapprdhdb80,30015,H81
-sapprdhdb80,30041,H82
-```
- ### Using SSL for communication with SAP HANA The `azacsnap` tool utilizes SAP HANA's `hdbsql` command to communicate with SAP HANA. This
The following are always used when using the `azacsnap --ssl` option:
- `-e` - Enables TLS/SSL encryption. The server chooses the highest available.
- `-ssltrustcert` - Specifies whether to validate the server's certificate.
- `-sslhostnameincert "*"` - Specifies the host name used to verify the server's identity. By
- specifying `"*"` as the host name, then the server's host name is not validated.
+ specifying `"*"` as the host name, then the server's host name is not validated.
SSL communication also requires Key Store and Trust Store files. While it is possible for these files to be stored in default locations on a Linux installation, to ensure the
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-introduction.md
# What is Azure Application Consistent Snapshot tool (preview)
-Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, SUSE and RHEL).
+Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool that enables data protection for third-party databases by handling all the orchestration required to put them into an application-consistent state before taking a storage snapshot, after which it returns them to an operational state.
+
+## Supported platforms and operating systems
+
+- **Databases**
+ - SAP HANA (refer to [support matrix](azacsnap-get-started.md#snapshot-support-matrix-from-sap) for details)
+
+- **Operating Systems**
+ - SUSE Linux Enterprise Server 12+
+ - Red Hat Enterprise Linux 7+
## Benefits of using AzAcSnap
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-tips.md
az role definition create --role-definition '{ \
}' ```
+For restore options to work successfully, the AzAcSnap service principal also needs to be able to create volumes. In this case, the role definition needs an additional action, so the complete role definition should look like the following example.
+
+```bash
+az role definition create --role-definition '{ \
+ "Name": "Azure Application Consistent Snapshot tool", \
+ "IsCustom": "true", \
+ "Description": "Perform snapshots and restores on ANF volumes.", \
+ "Actions": [ \
+ "Microsoft.NetApp/*/read", \
+ "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/write", \
+ "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/delete", \
+ "Microsoft.NetApp/netAppAccounts/capacityPools/volumes/write" \
+ ], \
+ "NotActions": [], \
+ "DataActions": [], \
+ "NotDataActions": [], \
+ "AssignableScopes": ["/subscriptions/<insert your subscription id>"] \
+}'
+```
+
## Take snapshots manually

Before executing any backup commands (`azacsnap -c backup`), check the configuration by running the test commands and verify they complete successfully. Correct execution of these tests proves `azacsnap` can communicate with the installed SAP HANA database and the underlying storage system of the SAP HANA on **Azure Large Instance** or **Azure NetApp Files** system.
Key attributes of storage volume snapshots:
## Next steps -- [Troubleshoot](azacsnap-troubleshoot.md)
+- [Troubleshoot](azacsnap-troubleshoot.md)
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-troubleshoot.md
Example output from `/var/log/messages` file.
Dec 17 09:01:13 azacsnap-rhel azacsnap: Database # 1 (PR1) : completed ok ```
+## Failed communication with Azure NetApp Files
+
+When validating communication with Azure NetApp Files, communication might fail or time out. Check to ensure firewall rules are not blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports:
+
+- (https://)management.azure.com:443
+- (https://)login.microsoftonline.com:443
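A quick reachability check for the two endpoints above can be sketched as follows. This is a hypothetical diagnostic helper, not part of AzAcSnap; it only tests whether a TCP connection can be opened, not whether authentication would succeed:

```python
import socket

# Endpoints that AzAcSnap needs to reach over TCP 443 (from the list above).
AZACSNAP_ENDPOINTS = [
    ("management.azure.com", 443),
    ("login.microsoftonline.com", 443),
]

def check_endpoint(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in AZACSNAP_ENDPOINTS:
        status = "reachable" if check_endpoint(host, port) else "BLOCKED or unreachable"
        print(f"{host}:{port} -> {status}")
```

If either endpoint reports as blocked, review the outbound rules on your firewall or network security group before re-running `azacsnap -c test`.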
+ ## Failed communication with SAP HANA When validating communication with SAP HANA by running a test with `azacsnap -c test --test hana` and it provides the following error:
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na ms.devlang: na Previously updated : 03/01/2021 Last updated : 03/19/2021 # Create an SMB volume for Azure NetApp Files
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 03/08/2021 Last updated : 03/19/2021 # Solution architectures using Azure NetApp Files
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
na ms.devlang: na Previously updated : 03/01/2021 Last updated : 03/19/2021 # Create and manage Active Directory connections for Azure NetApp Files
azure-netapp-files Solutions Benefits Azure Netapp Files Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-sql-server.md
na ms.devlang: na Previously updated : 02/08/2021 Last updated : 03/19/2021 # Benefits of using Azure NetApp Files for SQL Server deployment
azure-netapp-files Troubleshoot Dual Protocol Volumes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/troubleshoot-dual-protocol-volumes.md
This article describes resolutions to error conditions you might have when creat
| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.x.x.x, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Azure ADDS, make sure that the organizational unit path is `OU=AADDC Computers`. 
| | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.x.x.x`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.x.x.x` -> `contoso.com`. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](/create-active-directory-connections.md#create-an-active-directory-connection) is enabled both in the Active Directory connection and for the service account. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](/azure/azure-netapp-files/create-active-directory-connections#create-an-active-directory-connection) is enabled both in the Active Directory connection and for the service account. |
| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-NTAP-VOL\". Reason: LDAP Error: Strong authentication is required Details: Error: Machine account creation procedure failed\n [ 338] Loaded the preliminary configuration.\n [ nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ nnn ] Successfully connected to ip 10.x.x.x, port 389 using TCP\n [ 765] Unable to connect to LDAP (Active Directory) service on\n dc51.area51.com (Error: Strong(er) authentication\n required)\n*[ nnn] FAILURE: Unable to make a connection (LDAP (Active\n* Directory):contoso.com), result: 7609\n. "` | The LDAP Signing option is not selected, but the AD client has LDAP signing. [Enable LDAP Signing](create-active-directory-connections.md#create-an-active-directory-connection) and retry. | ## Next steps
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
na ms.devlang: na Previously updated : 03/11/2021 Last updated : 03/19/2021
Azure NetApp Files is updated regularly. This article provides a summary about t
## March 2021
-* SMB Continuous Availability (CA) shares (Preview)
+* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Linux SQL Server is not currently supported. Enabling this feature provides significant SQL Server performance improvements and scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
azure-percept How To Update Via Usb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-via-usb.md
This guide will show you how to successfully update your dev kit's operating sys
- Windows:
- ```bash
+ ```console
uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw ```
azure-percept Overview Advanced Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-advanced-code.md
Previously updated : 02/18/2021 Last updated : 03/23/2021 # Advanced development with Azure Percept
-With Azure Percept, software developers and data scientists are able to use advanced code workflows for AI Lifecycle management. Through a growing open source library, they can use samples to get started with their AI development journey and build production ready solutions.
-## Get started with the advanced development tutorials
+With Azure Percept, software developers and data scientists can use advanced code workflows for AI lifecycle management. Through a growing open source library, they can use samples to get started with their AI development journey and build production-ready solutions.
-Learn about all of the available [Azure Percept AI models](./overview-ai-models.md).
+## Get started with advanced development
-Please see the [Azure Percept DK advanced development GitHub](https://github.com/microsoft/azure-percept-advanced-development) for
+See the [Azure Percept DK advanced development GitHub](https://github.com/microsoft/azure-percept-advanced-development) for
up-to-date guidance, tutorials, and examples for things like:
-* Bringing a custom AI model to the device
-* Updating a model we already support with transfer learning
-* And more
+- Deploying a custom AI model to your Azure Percept DK
+- Updating a supported model with transfer learning
+- And more
## Next steps
-Learn about all of the available [Azure Percept AI models](./overview-ai-models.md). If none of these models suit your needs,
-feel free to use the advanced code journey to bring your own model or computer vision pipeline to the Percept DK,
-and if you feel it would help others, open a pull request too!
+Learn more about the available [Azure Percept AI models](./overview-ai-models.md). If none of these models suit your needs, use the advanced code journey to bring your own model or computer vision pipeline to the Percept DK. If you have a contribution that you think would help others, feel free to open a pull request too.
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-ai-models.md
Previously updated : 02/16/2021 Last updated : 03/23/2021
# Azure Percept AI models
-Azure Percept enables you to develop and deploy AI models directly to your Azure Percept DK from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). Model deployment utilizes [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/#iotedge-overview).
+Azure Percept enables you to develop and deploy AI models directly to your [Azure Percept DK](./overview-azure-percept-dk.md) from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). Model deployment utilizes [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/#iotedge-overview).
## Sample AI models
Azure Percept Studio contains sample models for the following applications:
- general object detection
- products-on-shelf detection
-With pre-trained models, no coding or training data collection is required. Simply deploy your desired model to your Azure Percept DK from the portal and open your devkit's video stream to see the model inferencing in action. Model inferencing telemetry can also be accessed through the [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases) tool.
+With pre-trained models, no coding or training data collection is required. Simply [deploy your desired model](./how-to-deploy-model.md) to your Azure Percept DK from the portal and open your devkit's [video stream](./how-to-view-video-stream.md) to see the model inferencing in action. [Model inferencing telemetry](./how-to-view-telemetry.md) can also be accessed through the [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer/releases) tool.
## Reference solutions
A [people counting reference solution](https://github.com/microsoft/Azure-Percep
## Custom no-code solutions
-Through Azure Percept Studio, you can develop custom [vision](./tutorial-nocode-vision.md) and speech solutions, no coding required.
+Through Azure Percept Studio, you can develop custom [vision](./tutorial-nocode-vision.md) and [speech](./tutorial-no-code-speech.md) solutions, no coding required.
-For custom vision solutions, both object detection and classification AI models are available. Simply upload and tag your training images, which can be taken directly with the Azure Percept Vision SoM of the Azure Percept DK, if desired. Model training and evaluation are easily performed in [Custom Vision](https://www.customvision.ai/), which is part of [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/#overview).
+For custom vision solutions, both object detection and classification AI models are available. Simply upload and tag your training images, which can be taken directly with the Azure Percept Vision SoM of the Azure Percept DK if desired. Model training and evaluation are easily performed in [Custom Vision](https://www.customvision.ai/), which is part of [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/#overview).
</br>
Pre-built voice assistant keywords and commands are available directly through t
Please see the [Azure Percept DK advanced development GitHub](https://github.com/microsoft/azure-percept-advanced-development) for up-to-date guidance, tutorials, and examples for things like:
-* Bringing a custom AI model to the device
-* Updating a model we already support with transfer learning
-* And more
+- Deploying a custom AI model to your Azure Percept DK
+- Updating a supported model with transfer learning
+- And more
azure-percept Overview Azure Percept Audio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-audio.md
Previously updated : 02/18/2021 Last updated : 03/23/2021
# Introduction to Azure Percept Audio
-Azure Percept Audio is an accessory device that adds speech AI capabilities to the Azure Percept DK. It contains a preconfigured audio processor and a four-microphone linear array, enabling you to apply voice commanding, keyword spotting, and far field speech to local listening devices using Azure Cognitive Services. Azure Percept Audio enables device manufacturers to extend Azure Percept DK beyond vision capabilities to new, smart voice-activated devices. It is integrated out-of-the-box with Azure Percept DK, Azure Percept Studio, and other Azure edge management services. It is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
+Azure Percept Audio is an accessory device that adds speech AI capabilities to [Azure Percept DK](./overview-azure-percept-dk.md). It contains a preconfigured audio processor and a four-microphone linear array, enabling you to use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services. It is integrated out-of-the-box with Azure Percept DK, [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), and other Azure edge management services. Azure Percept Audio is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
> [!div class="nextstepaction"]
> [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
-<!
->
</br>

> [!VIDEO https://www.youtube.com/embed/Qj8NGn-7s5A]
Azure Percept Audio is an accessory device that adds speech AI capabilities to t
Azure Percept Audio contains the following major components:
-- Production-ready Azure Percept Audio device (SoM) with four-microphone linear array and audio processing done by an XMOS Codec
-- Developer (Interposer) board (includes 2x buttons, 3x LEDs, Micro USB, and 3.5 mm Audio Jack)
+- Production-ready Azure Percept Audio device (SoM) with a four-microphone linear array and audio processing via XMOS Codec
+- Developer (interposer) board: 2x buttons, 3x LEDs, Micro USB, and 3.5 mm audio jack
- Required cables: FPC cable, USB Micro Type-B to USB-A
- Welcome card
- Mechanical mounting plate with integrated 80/20 1010 series mount

## Compute capabilities
-Azure Percept Audio passes the audio input through the speech stack that runs on the CPU of the carrier board of the Azure Percept DK in a hybrid edge-cloud manner. Therefore, Azure Percept Audio requires a carrier board with an OS that supports the speech stack in order to perform. ​
+Azure Percept Audio passes audio input through the speech stack that runs on the CPU of the Azure Percept DK carrier board in a hybrid edge-cloud manner. Therefore, Azure Percept Audio requires a carrier board with an OS that supports the speech stack in order to perform. ​
-The processing is done as follows: ​
+The audio processing is done as follows: ​
- Azure Percept Audio: captures and converts the audio and sends it to the DK and audio jack.
-- Azure Percept DK: the speech stack performs beam forming and echo cancellation and processes the incoming audio to optimize for speech. It then performs the keyword spotting.
+- Azure Percept DK: the speech stack performs beam forming and echo cancellation and processes the incoming audio to optimize for speech. After processing, it performs keyword spotting.
- Cloud: processes natural language commands and phrases, keyword verification, and retraining. ​
The processing is done as follows: ​
## Getting started

- [Assemble your Azure Percept DK](./quickstart-percept-dk-unboxing.md)
-- [Complete the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
- [Connect your Azure Percept Audio device to your devkit](./quickstart-percept-audio-setup.md)
+- [Complete the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
## Build a no-code prototype
Build a [no-code speech solution](./tutorial-no-code-speech.md) in [Azure Percep
### Manage your no-code speech solution

-- [Configure your voice assistant in IoT Hub](./how-to-manage-voice-assistant.md)
-- [Configure your voice assistant in Azure Percept Studio](./how-to-configure-voice-assistant.md)
+- [Configure your voice assistant in Azure Percept Studio](./how-to-manage-voice-assistant.md)
+- [Configure your voice assistant in IoT Hub](./how-to-configure-voice-assistant.md)
- [Azure Percept Audio troubleshooting](./troubleshoot-audio-accessory-speech-module.md)

## Additional technical information
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-dk.md
Previously updated : 02/18/2021 Last updated : 03/23/2021
# Azure Percept DK overview
-Azure Percept DK is an edge AI and IoT development kit designed for developing vision and audio AI proof of concepts. When combined with [Azure Percept Studio](./overview-azure-percept-studio.md) and [Azure Percept Audio](./overview-azure-percept-audio.md), it becomes a powerful yet simple-to-use platform for building edge AI solutions for a wide range of vision or audio AI applications. It is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
+Azure Percept DK is an edge AI development kit designed for developing vision and audio AI solutions with [Azure Percept Studio](./overview-azure-percept-studio.md). Azure Percept DK is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270).
> [!div class="nextstepaction"]
> [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
-<!
->
</br>

> [!VIDEO https://www.youtube.com/embed/Qj8NGn-7s5A]
-## Key Features
+## Key features
-- **The ability to run AI at the edge**. With built-in hardware acceleration, it can run vision AI models without a connection to the cloud.
-- **Hardware root of trust security built in**. See this overview of [Azure Percept Security](./overview-percept-security.md) for more details.
-- **Seamless integration with [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819)** and other Azure services. Such as, Azure IoT Hub, Azure Cognitive Services and [Live Video Analytics](https://docs.microsoft.com/azure/media-services/live-video-analytics-edge/overview)
-- **Seamless integration with optional [Azure Percept Audio](./overview-azure-percept-audio.md)**
-- **Support for the top AI platforms**. Such as ONNX and TensorFlow.
-- **Integration with the 80/20 railing system**. Making it easier to build prototypes in production environments. Learn more about [80/20 integration](./overview-8020-integration.md).
+- Run AI at the edge. With built-in hardware acceleration, the dev kit can run AI models without a connection to the cloud.
-## Hardware Components
+- Hardware root of trust security built in. Learn more about [Azure Percept security](./overview-percept-security.md).
-- The Azure Percept DK carrier board
+- Seamless integration with [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and other Azure services, such as Azure IoT Hub, Azure Cognitive Services, and [Live Video Analytics](https://docs.microsoft.com/azure/media-services/live-video-analytics-edge/overview).
+
+- Compatible with [Azure Percept Audio](./overview-azure-percept-audio.md), an optional accessory for building AI audio solutions.
+
+- Support for third-party AI tools, such as ONNX and TensorFlow.
+
+- Integration with the 80/20 railing system, which allows for endless device mounting configurations. Learn more about [80/20 integration](./overview-8020-integration.md).
+
+## Hardware components
+
+- Azure Percept DK carrier board:
 - NXP iMX8m processor
 - Trusted Platform Module (TPM) version 2.0
- - WiFi and Bluetooth connectivity
- - See the full [data sheet](./azure-percept-dk-datasheet.md)
-- The Azure Percept Vision system on module (SoM)
+ - Wi-Fi and Bluetooth connectivity
+ - For more information, see the [Azure Percept DK datasheet](./azure-percept-dk-datasheet.md)
+
+- Azure Percept Vision system-on-module (SoM):
- Intel Movidius Myriad X (MA2085) vision processing unit (VPU)
- - RGB camera sensor with the ability to add a second
- - See the full [data sheet](./azure-percept-vision-datasheet.md)
+ - RGB camera sensor
+ - For more information, see the [Azure Percept Vision datasheet](./azure-percept-vision-datasheet.md)
-## Get Started with the Azure Percept DK
+## Getting started with Azure Percept DK
-- Complete these Quick Starts
+- Set up your dev kit:
- [Unbox and assemble the Azure Percept DK](./quickstart-percept-dk-unboxing.md)
- - [Set up the Azure Percept DK and run your first vision AI model](./quickstart-percept-dk-set-up.md)
-- Start building proof of concepts with these tutorials
+ - [Complete the Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
+
+- Start building vision and audio solutions:
- [Create a no-code vision solution in Azure Percept Studio](./tutorial-nocode-vision.md)
- - [Create a voice assistant in Azure Percept Studio](./tutorial-no-code-speech.md)
+ - [Create a no-code speech solution in Azure Percept Studio](./tutorial-no-code-speech.md) (Azure Percept Audio accessory required)
## Next steps
azure-percept Overview Azure Percept Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept-studio.md
Previously updated : 02/18/2021 Last updated : 03/23/2021
# Azure Percept Studio Overview
-[Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) is the single launch point for creating edge AI models and solutions. Azure Percept Studio allows you to discover and complete guided workflows that make it easy to integrate edge AI capable hardware and powerful Azure AI and IoT cloud services.
+[Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) is the single launch point for creating edge AI models and solutions. Azure Percept Studio allows you to discover and complete guided workflows that make it easy to integrate edge AI-capable hardware and powerful Azure AI and IoT cloud services.
-In the Studio, you can see your edge AI capable devices as end points for collecting initial and ongoing training data as well as deployment targets for model iterations. Having access to devices and training data allows for rapid prototyping and iterative Edge AI model development for both [vision](./tutorial-nocode-vision.md) and [speech](./tutorial-no-code-speech.md) scenarios.
-<!
->
+In the Studio, you can see your edge AI-capable devices as end points for collecting initial and ongoing training data as well as deployment targets for model iterations. Having access to devices and training data allows for rapid prototyping and iterative edge AI model development for both [vision](./tutorial-nocode-vision.md) and [speech](./tutorial-no-code-speech.md) scenarios.
-The workflows in Azure Percept Studio integrate many underlying Azure AI and IoT services, like Azure IoT Hub, Custom Vision, Speech Studio, and Azure ML Services, so you can use these services to create an end-to-end solution, without significant pre-existing knowledge. If you are already familiar with these Azure services, you can also connect to and modify existing resources outside of the Azure Percept Studio.
-<!
->
+The workflows in Azure Percept Studio integrate many Azure AI and IoT services, like Azure IoT Hub, Custom Vision, Speech Studio, and Azure ML, so you can use these services to create an end-to-end solution without significant pre-existing knowledge. If you are already familiar with these Azure services, you can also connect to and modify existing Azure service resources outside of Azure Percept Studio.
+
+Whether you are a beginner or an advanced AI model and solution developer, working on a prototype, or moving to a production solution, Azure Percept Studio offers access to workflows you can use to reduce friction around building edge AI solutions.
-Regardless of if you are a beginner or a more advanced AI model and solution developer, working on a prototype or moving to a production solution, for speech or vision Edge AI, the Azure Percept Studio offers access to workflows you can use to reduce friction around building Edge AI solutions.
-<!
->
</br>

> [!VIDEO https://www.youtube.com/embed/rZsUuCytZWY]

## Next steps

-- Check out Azure Percept Studio [here](https://go.microsoft.com/fwlink/?linkid=2135819)
+- Check out [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819)
- Get the Azure Percept DK and Azure Percept Audio accessory at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
-- Complete the Azure Percept DK setup [quick start guide](./quickstart-percept-dk-set-up.md)
-- Try the tutorials for building no-code [vision](./tutorial-nocode-vision.md) and [speech](./tutorial-no-code-speech.md) solutions
+- Learn more about [Azure Percept AI models and solutions](./overview-ai-models.md)
azure-percept Overview Azure Percept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/overview-azure-percept.md
Previously updated : 02/18/2021 Last updated : 03/23/2021
The integration challenges one faces when attempting to deploy edge AI solutions
- Identifying and selecting the right silicon to power the solutions.
- Ensuring the collective security of the hardware, software, models, and data.
-- The ability to build and manage solutions that seamlessly work, at scale.
+- The ability to build and manage solutions that seamlessly work at scale.
## Components of Azure Percept

The main components of Azure Percept are:
-1. AI hardware reference design and certification programs.
-
- - Provides the ecosystem of hardware developers with patterns and best practices for developing edge AI hardware that can be integrated easily with Azure AI and IoT services.
-
-2. Azure Percept DK (devkit).
+1. [Azure Percept DK.](./overview-azure-percept-dk.md)
- - A development kit that is flexible enough to support a wide variety of prototyping scenarios for device builders, solution builders and customers.
+ - A development kit that is flexible enough to support a wide variety of prototyping scenarios for device builders, solution builders, and customers.
> [!div class="nextstepaction"]
> [Buy now](https://go.microsoft.com/fwlink/p/?LinkId=2155270)
-3. Services and workflows to accelerate edge AI model and solution development.
+1. Services and workflows that accelerate edge AI model and solution development.
- Development workflows and pre-built models accessible from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819).
- Model development services.
- - Device management services for scale.
+ - Device management services for scaling.
- End-to-end security.
+1. AI hardware reference design and certification programs.
+
+ - Provides the ecosystem of hardware developers with patterns and best practices for developing edge AI hardware that can be integrated easily with Azure AI and IoT services.
+ ## Next steps
-Get started with [Azure Percept DK](./overview-azure-percept-dk.md).
+Learn more about [Azure Percept DK](./overview-azure-percept-dk.md) and [Azure Percept Studio](./overview-azure-percept-studio.md).
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/quickstart-portal-dashboard-powershell.md
a JSON representation of a sample dashboard. For more information, see [The stru
```azurepowershell-interactive
$myPortalDashboardTemplateUrl = 'https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json'
-$myPortalDashboardTemplatePath = "$env:TEMP\portal-dashboard-template-testvm.json"
+$myPortalDashboardTemplatePath = "$HOME\portal-dashboard-template-testvm.json"
Invoke-WebRequest -Uri $myPortalDashboardTemplateUrl -OutFile $myPortalDashboardTemplatePath -UseBasicParsing
```
To remove the VM and associated dashboard, delete the resource group that contai
```azurepowershell-interactive
Remove-AzResourceGroup -Name $resourceGroupName
+Remove-Item -Path "$HOME\portal-dashboard-template-testvm.json"
```

## Next steps
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers that are marked with **- registered** are registered by
| Microsoft.AnalysisServices | [Azure Analysis Services](../../analysis-services/index.yml) |
| Microsoft.ApiManagement | [API Management](../../api-management/index.yml) |
| Microsoft.AppConfiguration | [Azure App Configuration](../../azure-app-configuration/index.yml) |
-| Microsoft.AppPlatform | [Azure Spring Cloud](../../spring-cloud/spring-cloud-overview.md) |
+| Microsoft.AppPlatform | [Azure Spring Cloud](../../spring-cloud/overview.md) |
| Microsoft.Attestation | Azure Attestation Service |
| Microsoft.Authorization - [registered](#registration) | [Azure Resource Manager](../index.yml) |
| Microsoft.Automation | [Automation](../../automation/index.yml) |
The resource providers that are marked with **- registered** are registered by
| Microsoft.MarketplaceApps | core |
| Microsoft.MarketplaceOrdering - [registered](#registration) | core |
| Microsoft.Media | [Media Services](../../media-services/index.yml) |
-| Microsoft.Microservices4Spring | [Azure Spring Cloud](../../spring-cloud/spring-cloud-overview.md) |
+| Microsoft.Microservices4Spring | [Azure Spring Cloud](../../spring-cloud/overview.md) |
| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) |
| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) |
| Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
azure-resource-manager Bicep Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-modules.md
output storageEndpoint object = stgModule.outputs.storageEndpoint
"resources": [
  {
    "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "storageDeploy",
"properties": {
  ...
azure-resource-manager Bicep Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-overview.md
Title: Bicep language for Azure Resource Manager templates
description: Describes the Bicep language for deploying infrastructure to Azure through Azure Resource Manager templates.
Previously updated : 03/17/2021 Last updated : 03/23/2021
# What is Bicep (Preview)?
Bicep is a language for declaratively deploying Azure resources. You can use Bic
The JSON syntax for creating templates can be verbose and requires complicated expressions. Bicep improves that experience without losing any of the capabilities of a JSON template. It's a transparent abstraction over the JSON for ARM templates. Each Bicep file compiles to a standard ARM template. Resource types, API versions, and properties that are valid in an ARM template are valid in a Bicep file. There are a few [known limitations](#known-limitations) in the current release.
+Bicep is currently in preview. To track the status of the work, see the [Bicep project repository](https://github.com/Azure/bicep).
+ To learn about Bicep, see the following video.
-> [!VIDEO https://mediusprodstatic.studios.ms/asset-cccfdaf2-cdbe-49dd-9c58-91a4fe5ff0fd/OD340_1920x1080_AACAudio_5429.mp4?sv=2018-03-28&sr=b&sig=N3DuBaTrK3nt5TGwIagTbCqjVrzgwiJ9at80MXQJFwg%3D&st=2021-03-02T01%3A22%3A57Z&se=2026-03-02T01%3A27%3A57Z&sp=r&rscd=filename%3DIGFY21Q3-OD340-Learn%2Beverything%2Babout%2Bthe%2Bnext%2Bgeneration%2Bof%2BARM.mp4]
+> [!VIDEO https://www.youtube.com/embed/sc1kJfcRQgY]
## Get started
-To start with Bicep, [install the tools](https://github.com/Azure/bicep/blob/main/docs/installing.md).
+To start with Bicep, [install the tools](bicep-install.md).
After installing the tools, try the [Bicep tutorial](./bicep-tutorial-create-first-bicep.md). The tutorial series walks you through the structure and capabilities of Bicep. You deploy Bicep files, and convert an ARM template into the equivalent Bicep file.
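Bicep's value is easiest to see in a short sample. The following is an illustrative sketch only (the resource name and API version are example values, not taken from this article): a Bicep file declaring a single storage account, which the Bicep CLI compiles to the equivalent ARM template JSON.

```bicep
// Example sketch: one storage account; the name and API version are illustrative.
param location string = resourceGroup().location

resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: 'stbicepsketch001'
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}
```

Compared to the equivalent JSON, property names are unquoted, parameters are declared inline, and dependency bookkeeping is inferred from symbolic references.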
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/common-deployment-errors.md
To log debug information for a nested template, use the **debugSetting** element
```json
{
  "type": "Microsoft.Resources/deployments",
- "apiVersion": "2016-09-01",
+ "apiVersion": "2020-10-01",
"name": "nestedTemplate",
"properties": {
  "mode": "Incremental",
azure-resource-manager Copy Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/copy-properties.md
The copy element has the following general format:
```json
"copy": [
  {
- "name": "<name-of-loop>",
+ "name": "<name-of-property>",
    "count": <number-of-iterations>,
    "input": <values-for-the-property>
  }
]
```
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-cli.md
The deployment can take a few minutes to complete. When it finishes, you see a m
## Deploy remote template > [!NOTE]
-> Currently, Azure CLI doesn't support deploying remote Bicep files. To deploy a remote Bicep file, use CLI Bicep to compile the Bicep file to a JSON template first.
+> Currently, Azure CLI doesn't support deploying remote Bicep files. Use [Bicep CLI](./bicep-install.md#development-environment) to compile the Bicep file to a JSON template, and then upload the JSON file to the remote location.
Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization.
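As a small sketch of that workflow (the file name is a placeholder, and the upload step depends on where you host templates), you first compile the Bicep file to its JSON template, then publish the JSON to the remote location:

```shell
# Derive the output name the Bicep CLI produces: foo.bicep -> foo.json
src="main.bicep"
out="${src%.bicep}.json"

# bicep build "$src"    # would compile $src and emit $out alongside it
# The resulting $out is then uploaded to the remote location, for example
# an Azure storage container or a GitHub repository.
echo "$out"
```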
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-powershell.md
The deployment can take several minutes to complete.
## Deploy remote template > [!NOTE]
-> Currently, Azure PowerShell doesn't support deploying remote Bicep files. To deploy a remote Bicep file, use CLI Bicep to compile the Bicep file to a JSON template first.
+> Currently, Azure PowerShell doesn't support deploying remote Bicep files. Use [Bicep CLI](./bicep-install.md#development-environment) to compile the Bicep file to a JSON template, and then upload the JSON file to the remote location.
Instead of storing ARM templates on your local machine, you may prefer to store them in an external location. You can store templates in a source control repository (such as GitHub). Or, you can store them in an Azure storage account for shared access in your organization.
azure-resource-manager Deploy Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-rest.md
You can target your deployment to a resource group, Azure subscription, manageme
- To deploy to a **resource group**, use [Deployments - Create](/rest/api/resources/deployments/createorupdate). The request is sent to:

  ```HTTP
- PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
  ```

- To deploy to a **subscription**, use [Deployments - Create At Subscription Scope](/rest/api/resources/deployments/createorupdateatsubscriptionscope). The request is sent to:

  ```HTTP
- PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
  ```

  For more information about subscription level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md).
You can target your deployment to a resource group, Azure subscription, manageme
- To deploy to a **management group**, use [Deployments - Create At Management Group Scope](/rest/api/resources/deployments/createorupdateatmanagementgroupscope). The request is sent to:

  ```HTTP
- PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{groupId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
+ PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{groupId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
  ```

  For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md).
You can target your deployment to a resource group, Azure subscription, manageme
- To deploy to a **tenant**, use [Deployments - Create Or Update At Tenant Scope](/rest/api/resources/deployments/createorupdateattenantscope). The request is sent to:

  ```HTTP
- PUT https://management.azure.com/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
+ PUT https://management.azure.com/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
  ```

  For more information about tenant level deployments, see [Create resources at the tenant level](deploy-to-tenant.md).
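The scoped request URIs above differ only in their path segments; the `api-version` query string is the same at every scope. As a rough sketch (the subscription ID, resource group, and deployment name below are placeholders, not real resources), the resource-group-scope URI can be composed and submitted like this:

```shell
# Placeholder identifiers for illustration only.
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroupName="myResourceGroup"
deploymentName="myDeployment"
apiVersion="2020-10-01"

# Compose the resource-group-scope deployment URI.
uri="https://management.azure.com/subscriptions/${subscriptionId}/resourcegroups/${resourceGroupName}/providers/Microsoft.Resources/deployments/${deploymentName}?api-version=${apiVersion}"

# With a valid bearer token and a deploy.json request body, the deployment
# would be submitted with a PUT and its status polled with a GET, for example:
#   curl -X PUT "$uri" -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d @deploy.json
#   curl "$uri" -H "Authorization: Bearer $TOKEN"
echo "$uri"
```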
The examples in this article use resource group deployments.
1. To deploy a template, provide your subscription ID, the name of the resource group, and the name of the deployment in the request URI.

   ```HTTP
- PUT https://management.azure.com/subscriptions/<YourSubscriptionId>/resourcegroups/<YourResourceGroupName>/providers/Microsoft.Resources/deployments/<YourDeploymentName>?api-version=2019-10-01
+ PUT https://management.azure.com/subscriptions/<YourSubscriptionId>/resourcegroups/<YourResourceGroupName>/providers/Microsoft.Resources/deployments/<YourDeploymentName>?api-version=2020-10-01
   ```

   In the request body, provide a link to your template and parameter file. For more information about the parameter file, see [Create Resource Manager parameter file](parameter-files.md).
The examples in this article use resource group deployments.
1. To get the status of the template deployment, use [Deployments - Get](/rest/api/resources/deployments/get).

   ```HTTP
- GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2019-10-01
+ GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
   ```

## Deployment name
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-management-group.md
From a management group level deployment, you can target a subscription within t
"resources": [
  {
    "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "nestedSub",
"location": "[parameters('nestedLocation')]",
"subscriptionId": "[parameters('nestedSubId')]",
From a management group level deployment, you can target a subscription within t
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "[parameters('nestedRG')]", "location": "[parameters('nestedLocation')]" }
From a management group level deployment, you can target a subscription within t
}, { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "nestedRG", "subscriptionId": "[parameters('nestedSubId')]", "resourceGroup": "[parameters('nestedRG')]",
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-resource-group.md
From a resource group deployment, you can switch to the level of a subscription
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "[parameters('newResourceGroupName')]", "location": "[parameters('location')]", "properties": {}
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-subscription.md
The following template creates an empty resource group.
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "[parameters('rgName')]", "location": "[parameters('rgLocation')]", "properties": {}
Use the [copy element](copy-resources.md) with resource groups to create more th
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"location": "[parameters('rgLocation')]", "name": "[concat(parameters('rgNamePrefix'), copyIndex())]", "copy": {
The following example creates a resource group, and deploys a storage account to
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "[parameters('rgName')]", "location": "[parameters('rgLocation')]", "properties": {} }, { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "storageDeployment", "resourceGroup": "[parameters('rgName')]", "dependsOn": [
azure-resource-manager Deployment History Deletions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-history-deletions.md
Title: Deployment history deletions description: Describes how Azure Resource Manager automatically deletes deployments from the deployment history. Deployments are deleted when the history is close to exceeding the limit of 800. Previously updated : 10/01/2020 Last updated : 03/23/2021 # Automatic deletions from deployment history
lockid=$(az lock show --resource-group lockedRG --name deleteLock --output tsv -
az lock delete --ids $lockid ```
+## Required permissions
+
+The deletions are requested under the identity of the user who deployed the template. To delete deployments, the user must have access to the **Microsoft.Resources/deployments/delete** action. If the user doesn't have the required permissions, deployments aren't deleted from the history.
+
+If the current user doesn't have the required permissions, automatic deletion is attempted again during the next deployment.
+ ## Opt out of automatic deletions You can opt out of automatic deletions from the history. **Use this option only when you want to manage the deployment history yourself.** The limit of 800 deployments in the history is still enforced. If you exceed 800 deployments, you'll receive an error and your deployment will fail.
azure-resource-manager Deployment Modes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-modes.md
The following example shows a linked template set to incremental deployment mode
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2017-05-10",
+ "apiVersion": "2020-10-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-script-template.md
Previously updated : 03/18/2021 Last updated : 03/23/2021
The following JSON is an example. For more information, see the latest [template
> [!NOTE] > The example is for demonstration purposes. The properties `scriptContent` and `primaryScriptUri` can't coexist in a template.
+> [!NOTE]
+> The _scriptContent_ shows a script with multiple lines. The Azure portal and Azure DevOps pipeline can't parse a deployment script with multiple lines. You can either chain the PowerShell commands (by using semicolons or _\\r\\n_ or _\\n_) into one line, or use the `primaryScriptUri` property with an external script file. There are many free JSON string escape/unescape tools available. For example, [https://www.freeformatter.com/json-escape.html](https://www.freeformatter.com/json-escape.html).
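As a sketch of the single-line approach described in the note, chained PowerShell commands can be written either with semicolons or with escaped line breaks (the commands themselves are illustrative):

```json
"scriptContent": "Write-Output 'first command'; Write-Output 'second command'"
```

or, equivalently, with `\r\n` separators:

```json
"scriptContent": "$output = 'Hello'\r\nWrite-Output $output"
```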
+ Property value details: - `identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. Currently, only user-assigned managed identity is supported.
Property value details:
- `environmentVariables`: Specify the environment variables to pass over to the script. For more information, see [Develop deployment scripts](#develop-deployment-scripts). - `scriptContent`: Specify the script content. To run an external script, use `primaryScriptUri` instead. For examples, see [Use inline script](#use-inline-scripts) and [Use external script](#use-external-scripts).
- > [!NOTE]
- > The Azure portal can't parse a deployment script with multiple lines. To deploy a template with deployment script from the Azure portal, you can either chain the PowerShell commands by using semicolons into one line, or use the `primaryScriptUri` property with an external script file.
- - `primaryScriptUri`: Specify a publicly accessible URL to the primary deployment script with supported file extensions. For more information, see [Use external scripts](#use-external-scripts). - `supportingScriptUris`: Specify an array of publicly accessible URLs to supporting files that are called in either `scriptContent` or `primaryScriptUri`. For more information, see [Use external scripts](#use-external-scripts). - `timeout`: Specify the maximum allowed script execution time specified in the [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). Default value is **P1D**.
azure-resource-manager Error Job Size Exceeded https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/error-job-size-exceeded.md
Title: Job size exceeded error description: Describes how to troubleshoot errors when job size or template are too large. Previously updated : 01/19/2021 Last updated : 03/23/2021 # Resolve errors for job size exceeded
You get this error when the deployment exceeds one of the allowed limits. Typica
The deployment job can't exceed 1 MB. The job includes metadata about the request. For large templates, the metadata combined with the template can exceed the allowed size for a job. - The template can't exceed 4 MB. The 4-MB limit applies to the final state of the template after it has been expanded for resource definitions that use [copy](copy-resources.md) to create many instances. The final state also includes the resolved values for variables and parameters. Other limits for the template are:
Other limits for the template are:
* 64 output values * 24,576 characters in a template expression
+When using copy loops to deploy resources, do not use the loop name as a dependency:
+
+```json
+"dependsOn": [ "nicLoop" ]
+```
+
+Instead, use the instance of the resource from the loop that you need to depend on. For example:
+
+```json
+"dependsOn": [
+ "[resourceId('Microsoft.Network/networkInterfaces', concat('nic-', copyIndex()))]"
+]
+```
+ ## Solution 1 - Simplify template Your first option is to simplify the template. This option works when your template deploys lots of different resource types. Consider dividing the template into [linked templates](linked-templates.md). Divide your resource types into logical groups and add a linked template for each group. For example, if you need to deploy lots of networking resources, you can move those resources to a linked template.
You can set other resources as dependent on the linked template, and [get values
## Solution 2 - Reduce name size
-Try to shorten the length of the names you use for [parameters](template-parameters.md), [variables](template-variables.md), and [outputs](template-outputs.md). When these values are repeated through copy loops, a large name gets multiplied many times.
-
-## Solution 3 - Use serial copy
-
-Consider changing your copy loop from [parallel to serial processing](copy-resources.md#serial-or-parallel). Use this option only when you suspect the error comes from deploying a large number of resources through copy. This change can significantly increase your deployment time because the resources aren't deployed in parallel.
+Try to shorten the length of the names you use for [parameters](template-parameters.md), [variables](template-variables.md), and [outputs](template-outputs.md). When these values are repeated through copy loops, a large name gets multiplied many times.
azure-resource-manager Error Not Found https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/error-not-found.md
Title: Resource not found errors description: Describes how to resolve errors when a resource can't be found. The error can occur when deploying an Azure Resource Manager template or when taking management actions. Previously updated : 06/10/2020 Last updated : 03/23/2021 # Resolve resource not found errors
When deploying a template, look for expressions that use the [reference](templat
```json "[reference(resourceId('exampleResourceGroup', 'Microsoft.Storage/storageAccounts', 'myStorage'), '2017-06-01')]" ```+
+## Solution 6 - After deleting a resource
+
+When you delete a resource, there may be a short amount of time when the resource still appears in the portal but isn't actually available. If you select the resource, you'll get an error that says the resource isn't found. Refresh the portal to get the latest view.
+
+If the problem continues after a short wait, [contact support](https://azure.microsoft.com/support/options/).
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/key-vault-parameter.md
The following template dynamically creates the key vault ID and passes it as a p
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2018-05-01",
+ "apiVersion": "2020-10-01",
"name": "dynamicSecret", "properties": { "mode": "Incremental",
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/linked-templates.md
To nest a template, add a [deployments resource](/azure/templates/microsoft.reso
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "nestedTemplate1", "properties": { "mode": "Incremental",
The following example deploys a storage account through a nested template.
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "nestedTemplate1", "properties": { "mode": "Incremental",
You set the scope through the `expressionEvaluationOptions` property. By default
```json { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "nestedTemplate1", "properties": { "expressionEvaluationOptions": {
The following template demonstrates how template expressions are resolved accord
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "nestedTemplate1", "properties": { "expressionEvaluationOptions": {
The following example deploys a SQL server and retrieves a key vault secret to u
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "dynamicSecret", "properties": { "mode": "Incremental",
The following excerpt shows which values are secure and which aren't secure.
{ "name": "outer", "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"properties": { "expressionEvaluationOptions": { "scope": "outer"
The following excerpt shows which values are secure and which aren't secure.
{ "name": "inner", "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"properties": { "expressionEvaluationOptions": { "scope": "inner"
To link a template, add a [deployments resource](/azure/templates/microsoft.reso
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
You can provide the parameters for your linked template either in an external fi
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
To pass parameter values inline, use the `parameters` property.
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
The following example template shows how to use `copy` with a nested template.
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "[concat('nestedTemplate', copyIndex())]", // yes, copy works here "copy": {
The following template links to the preceding template. It creates three public
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "[concat('linkedTemplate', copyIndex())]", "copy": { "count": 3,
The following example shows how to pass a SAS token when linking to a template:
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "linkedTemplate", "properties": { "mode": "Incremental",
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/quickstart-create-template-specs.md
To deploy a template spec, use the same deployment commands as you would use to
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "demo", "properties": { "templateLink": {
Rather than creating a new template spec for the revised template, add a new ver
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "demo", "properties": { "templateLink": {
azure-resource-manager Template Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-date.md
The next example shows how to use a value from the function when setting a tag v
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2018-05-01",
+ "apiVersion": "2020-10-01",
"name": "[parameters('rgName')]", "location": "westeurope", "tags": {
The next example shows how to use a value from the function when setting a tag v
param utcShort string = utcNow('d') param rgName string
-resource myRg 'Microsoft.Resources/resourceGroups@2018-05-01' = {
+resource myRg 'Microsoft.Resources/resourceGroups@2020-10-01' = {
name: rgName location: 'westeurope' tags: {
azure-resource-manager Template Specs Create Linked https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs-create-linked.md
The `relativePath` property is always relative to the template file where `relat
}, { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "createStorage", "properties": { "mode": "Incremental",
azure-resource-manager Template Specs Deploy Linked Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-specs-deploy-linked-template.md
To deploy a template spec in an ARM template, add a [deployments resource](/azur
}, { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2020-10-01",
"name": "createStorage", "properties": { "mode": "Incremental",
azure-resource-manager Templates Cloud Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/templates-cloud-consistency.md
The following code shows how the templateLink parameter refers to a nested templ
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2017-05-10",
+ "apiVersion": "2020-10-01",
"name": "linkedTemplate", "properties": { "mode": "incremental",
Throughout the template, links are generated by combining the base URI (from the
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2019-10-01",
+ "apiVersion": "2020-10-01",
"name": "shared", "properties": { "mode": "Incremental",
azure-sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/azure-hybrid-benefit.md
To set or update the license type by using PowerShell:
To set or update the license type by using the Azure CLI: - [az sql db create](/cli/azure/sql/db#az-sql-db-create)-- [az sql db update](/cli/azure/sql/db#az-sql-db-update) - [az sql mi create](/cli/azure/sql/mi#az-sql-mi-create) - [az sql mi update](/cli/azure/sql/mi#az-sql-mi-update)
azure-sql Automatic Tuning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automatic-tuning-overview.md
Previously updated : 03/30/2020 Last updated : 03/23/2021 # Automatic tuning in Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
For an overview of how automatic tuning works and for typical usage scenarios, s
## Enable automatic tuning -- You [enable automatic tuning for Azure SQL Database in the Azure portal](automatic-tuning-enable.md) or by using the [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current) T-SQL statement.-- You enable automatic tuning for Azure SQL Managed Instance by using the [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-mi-current) T-SQL statement.
+- You [enable automatic tuning for Azure SQL Database in the Azure portal](automatic-tuning-enable.md) or by using the [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-current&preserve-view=true) T-SQL statement.
+- You enable automatic tuning for Azure SQL Managed Instance by using the [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql-set-options?view=azuresqldb-mi-current&preserve-view=true) T-SQL statement.
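The T-SQL form of enabling the FORCE_LAST_GOOD_PLAN option, sketched from the ALTER DATABASE syntax referenced above:

```sql
-- Enable automatic plan correction on the current database
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
```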
## Automatic tuning options
The automatic tuning options available in Azure SQL Database and Azure SQL Manag
| Automatic tuning option | Single database and pooled database support | Instance database support | | :-- | -- | -- | | **CREATE INDEX** - Identifies indexes that may improve performance of your workload, creates indexes, and automatically verifies that performance of queries has improved. | Yes | No |
-| **DROP INDEX** - Identifies redundant and duplicate indexes daily, except for unique indexes, and indexes that were not used for a long time (>90 days). Please note that this option is not compatible with applications using partition switching and index hints. Dropping unused indexes is not supported for Premium and Business Critical service tiers. | Yes | No |
+| **DROP INDEX** - Drops unused (over the last 90 days) and duplicate indexes. Unique indexes, including indexes supporting primary key and unique constraints, are never dropped. This option may be automatically disabled when queries with index hints are present in the workload, or when the workload performs partition switching. On Premium and Business Critical service tiers, this option will never drop unused indexes, but will drop duplicate indexes, if any. | Yes | No |
| **FORCE LAST GOOD PLAN** (automatic plan correction) - Identifies Azure SQL queries using an execution plan that is slower than the previous good plan, and queries using the last known good plan instead of the regressed plan. | Yes | Yes | ### Automatic tuning for SQL Database
To learn about building email notifications for automatic tuning recommendations
### Automatic tuning for Azure SQL Managed Instance
-Automatic tuning for SQL Managed Instance only supports **FORCE LAST GOOD PLAN**. For more information about configuring automatic tuning options through T-SQL, see [Automatic tuning introduces automatic plan correction](https://azure.microsoft.com/blog/automatic-tuning-introduces-automatic-plan-correction-and-t-sql-management/) and [Automatic plan correction](/sql/relational-databases/automatic-tuning/automatic-tuning?view=sql-server-ver15#automatic-plan-correction).
+Automatic tuning for SQL Managed Instance only supports **FORCE LAST GOOD PLAN**. For more information about configuring automatic tuning options through T-SQL, see [Automatic tuning introduces automatic plan correction](https://azure.microsoft.com/blog/automatic-tuning-introduces-automatic-plan-correction-and-t-sql-management/) and [Automatic plan correction](/sql/relational-databases/automatic-tuning/automatic-tuning#automatic-plan-correction).
## Next steps
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
A failover can be initiated using PowerShell, REST API, or Azure CLI:
|Deployment type|PowerShell|REST API| Azure CLI| |:|:|:|:| |Database|[Invoke-AzSqlDatabaseFailover](/powershell/module/az.sql/invoke-azsqldatabasefailover)|[Database failover](/rest/api/sql/databases/failover)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
-|Elastic pool|[Invoke-AzSqlElasticPoolFailover](/powershell/module/az.sql/invoke-azsqlelasticpoolfailover)|[Elastic pool failover](/rest/api/sql/elasticpools(failover)/failover/)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
+|Elastic pool|[Invoke-AzSqlElasticPoolFailover](/powershell/module/az.sql/invoke-azsqlelasticpoolfailover)|[Elastic pool failover](/rest/api/sql/elasticpools/failover)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
|Managed Instance|[Invoke-AzSqlInstanceFailover](/powershell/module/az.sql/Invoke-AzSqlInstanceFailover/)|[Managed Instances - Failover](/rest/api/sql/managed%20instances%20-%20failover/failover)|[az sql mi failover](/cli/azure/sql/mi/#az-sql-mi-failover)| > [!IMPORTANT]
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
These are the current limitations to the Hyperscale service tier as of GA. We'r
| When changing Azure SQL Database service tier to Hyperscale, the operation fails if the database has any data files larger than 1 TB | In some cases, it may be possible to work around this issue by [shrinking](file-space-manage.md#shrinking-data-files) the large files to be less than 1 TB before attempting to change the service tier to Hyperscale. Use the following query to determine the current size of database files. `SELECT file_id, name AS file_name, size * 8. / 1024 / 1024 AS file_size_GB FROM sys.database_files WHERE type_desc = 'ROWS'`;| | SQL Managed Instance | Azure SQL Managed Instance isn't currently supported with Hyperscale databases. | | Elastic Pools | Elastic Pools aren't currently supported with Hyperscale.|
-| Migration to Hyperscale is currently a one-way operation | Once a database is migrated to Hyperscale, it can't be migrated directly to a non-Hyperscale service tier. At present, the only way to migrate a database from Hyperscale to non-Hyperscale is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az-sql-db-export) and [az sql db import](/cli/azure/sql/db#az-sql-db-import), and from [REST API](/rest/api/sql/databases%20-%20import%20export) is not supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons.|
+| Migration to Hyperscale is currently a one-way operation | Once a database is migrated to Hyperscale, it can't be migrated directly to a non-Hyperscale service tier. At present, the only way to migrate a database from Hyperscale to non-Hyperscale is to export/import using a bacpac file or other data movement technologies (Bulk Copy, Azure Data Factory, Azure Databricks, SSIS, etc.) Bacpac export/import from Azure portal, from PowerShell using [New-AzSqlDatabaseExport](/powershell/module/az.sql/new-azsqldatabaseexport) or [New-AzSqlDatabaseImport](/powershell/module/az.sql/new-azsqldatabaseimport), from Azure CLI using [az sql db export](/cli/azure/sql/db#az-sql-db-export) and [az sql db import](/cli/azure/sql/db#az-sql-db-import), and from [REST API](/rest/api/sql/) is not supported. Bacpac import/export for smaller Hyperscale databases (up to 200 GB) is supported using SSMS and [SqlPackage](/sql/tools/sqlpackage) version 18.4 and later. For larger databases, bacpac export/import may take a long time, and may fail for various reasons.|
| Migration of databases with In-Memory OLTP objects | Hyperscale supports a subset of In-Memory OLTP objects, including memory-optimized table types, table variables, and natively compiled modules. However, when any kind of In-Memory OLTP objects are present in the database being migrated, migration from Premium and Business Critical service tiers to Hyperscale is not supported. To migrate such a database to Hyperscale, all In-Memory OLTP objects and their dependencies must be dropped. After the database is migrated, these objects can be recreated. Durable and non-durable memory-optimized tables are not currently supported in Hyperscale, and must be changed to disk tables.| | Geo Replication | You can't yet configure geo-replication for Azure SQL Database Hyperscale. | | Database Copy | Database copy on Hyperscale is now in public preview. |
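The file-size query in the limitations table converts a page count to GB; the arithmetic can be sanity-checked with a quick sketch (the sample values are hypothetical):

```python
def pages_to_gb(size_in_pages: int) -> float:
    """Convert a SQL Server file size, reported in 8-KB pages, to gigabytes.

    Mirrors the expression `size * 8. / 1024 / 1024` from the query above.
    """
    return size_in_pages * 8.0 / 1024 / 1024

# A 1-TB data file is 134,217,728 pages of 8 KB each.
print(pages_to_gb(134217728))  # → 1024.0
```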
azure-vmware Tutorial Deploy Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-deploy-vmware-hcx.md
For an end-to-end overview of this procedure, view the [Azure VMware Solution: H
1. From **Select Distributed Switches for Network Extensions**, select the switches that contain the virtual machines to be migrated to Azure VMware Solution on a layer-2 extended network. Then select **Continue**. > [!NOTE]
- > If you are not migrating virtual machines on layer-2 extended networks, you can skip this step.
+ > If you are not migrating virtual machines on layer-2 (L2) extended networks, you can skip this step.
:::image type=" content" source="media/tutorial-vmware-hcx/select-layer-2-distributed-virtual-switch.png" alt-text="Screenshot that shows the selection of distributed virtual switches and the Continue button." lightbox="media/tutorial-vmware-hcx/select-layer-2-distributed-virtual-switch.png":::
For more information on using HCX, go to the VMware technical documentation:
* [VMware HCX Documentation](https://docs.vmware.com/en/VMware-HCX/index.html) * [Migrating Virtual Machines with VMware HCX](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-D0CD0CC6-3802-42C9-9718-6DA5FEC246C6.html?hWord=N4IghgNiBcIBIGEAaACAtgSwOYCcwBcMB7AOxAF8g) * [HCX required ports](https://ports.vmware.com/home/VMware-HCX)
-* [Set up an HCX proxy servr before you approve the license key](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-920242B3-71A3-4B24-9ACF-B20345244AB2.html?hWord=N4IghgNiBcIA4CcD2APAngAgBIGEAaIAvkA)
+* [Set up an HCX proxy server before you approve the license key](https://docs.vmware.com/en/VMware-HCX/services/user-guide/GUID-920242B3-71A3-4B24-9ACF-B20345244AB2.html?hWord=N4IghgNiBcIA4CcD2APAngAgBIGEAaIAvkA)
backup Backup Azure Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-delete-vault.md
To properly delete a vault, you must follow the steps in this order:
- **MABS or DPM management servers**: Go to the vault dashboard menu > **Backup Infrastructure** > **Backup Management Servers**. If you have DPM or Azure Backup Server (MABS), then all items listed here must be deleted or unregistered along with their backup data. [Follow these steps](#delete-protected-items-on-premises) to delete the management servers. - **Step 4**: You must ensure all registered storage accounts are deleted. Go to the vault dashboard menu > **Backup Infrastructure** > **Storage Accounts**. If you have storage accounts listed here, then you must unregister all of them. To learn how to unregister the account, see [Unregister a storage account](manage-afs-backup.md#unregister-a-storage-account).
+- **Step 5**: Ensure there are no private endpoints created for the vault. Go to the vault dashboard menu > **Private endpoint Connections** under **Settings**. If the vault has any private endpoint connections that were created, or that someone attempted to create, remove them before proceeding with vault deletion.
After you've completed these steps, you can continue to [delete the vault](#delete-the-recovery-services-vault).
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-restore-files-from-vm.md
To restore files or folders from the recovery point, go to the virtual machine a
3. In the Backup dashboard menu, select **File Recovery**.
- ![Select File Recovery](./media/backup-azure-restore-files-from-vm/vm-backup-menu-file-recovery-button.png)
+ ![Select File Recovery](./media/backup-azure-restore-files-from-vm/vm-backup-menu-file-recovery-button.png)
The **File Recovery** menu opens.
See requirements to restore files from backed-up VMs with large disk:<br>
[Windows OS](#for-backed-up-vms-with-large-disks-windows)<br> [Linux OS](#for-backed-up-vms-with-large-disks-linux)
+After you choose the correct machine to run the ILR script, ensure that it meets the [OS requirements](#step-3-os-requirements-to-successfully-run-the-script) and [access requirements](#step-4-access-requirements-to-successfully-run-the-script).
## Step 3: OS requirements to successfully run the script
The script also requires Python and bash components to execute and connect secur
| .NET | 4.6.2 and above |
| TLS | 1.2 should be supported |
+Also, ensure that you have the [right machine to execute the ILR script](#step-2-ensure-the-machine-meets-the-requirements-before-executing-the-script) and it meets the [access requirements](#step-4-access-requirements-to-successfully-run-the-script).
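The TLS 1.2 requirement above can be checked quickly from Python before running the ILR script; a minimal sketch (not part of the generated script itself):

```python
import ssl

# Minimal sketch: confirm the local OpenSSL build offers TLS 1.2,
# one of the requirements in the table above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
print("TLS 1.2 available:", ssl.HAS_TLSv1_2)
```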
+
## Step 4: Access requirements to successfully run the script

If you run the script on a computer with restricted access, ensure there's access to:
For Linux, the script requires 'open-iscsi' and 'lshw' components to connect to
The access to `download.microsoft.com` is required to download components used to build a secure channel between the machine where the script is run and the data in the recovery point.
+Also, ensure that you have the [right machine to execute the ILR script](#step-2-ensure-the-machine-meets-the-requirements-before-executing-the-script) and it meets the [OS requirements](#step-3-os-requirements-to-successfully-run-the-script).
## Step 5: Running the script and identifying volumes

### For Windows
-After you meet all the requirements listed in Step 2, Step 3 and Step 4, copy the script from the downloaded location (usually the Downloads folder), right-click the executable or script and run it with Administrator credentials. When prompted, type the password or paste the password from memory, and press Enter. Once the valid password is entered, the script connects to the recovery point.
+After you meet all the requirements listed in [Step 2](#step-2-ensure-the-machine-meets-the-requirements-before-executing-the-script), [Step 3](#step-3-os-requirements-to-successfully-run-the-script), and [Step 4](#step-4-access-requirements-to-successfully-run-the-script), copy the script from the downloaded location (usually the Downloads folder). To learn how to generate and download the script, see [Step 1](#step-1-generate-and-download-script-to-browse-and-recover-files). Right-click the executable file and run it with Administrator credentials. When prompted, type the password or paste the password from memory, and press Enter. Once the valid password is entered, the script connects to the recovery point.
![Executable output](./media/backup-azure-restore-files-from-vm/executable-output.png)
If the file recovery process hangs after you run the file-restore script (for ex
### For Linux
-For Linux machines, a python script is generated. Download the script and copy it to the relevant/compatible Linux server. You may have to modify the permissions to execute it with ```chmod +x <python file name>```. Then run the python file with ```./<python file name>```.
+After you meet all the requirements listed in [Step 2](#step-2-ensure-the-machine-meets-the-requirements-before-executing-the-script), [Step 3](#step-3-os-requirements-to-successfully-run-the-script), and [Step 4](#step-4-access-requirements-to-successfully-run-the-script), a Python script is generated for Linux machines. To learn how to generate and download the script, see [Step 1](#step-1-generate-and-download-script-to-browse-and-recover-files). Download the script and copy it to the relevant/compatible Linux server. You may have to modify the permissions to execute it with ```chmod +x <python file name>```. Then run the Python file with ```./<python file name>```.
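The chmod-and-run steps can be exercised end to end with a stand-in file; `demo_ilr_script.py` below is only a placeholder for the script you actually download in Step 1:

```shell
# Create a stand-in for the downloaded ILR script (placeholder only).
cat > demo_ilr_script.py <<'EOF'
#!/usr/bin/env python3
print("connected to recovery point (demo)")
EOF

# Make it executable, then run it -- the same two commands used
# for the real generated script.
chmod +x demo_ilr_script.py
./demo_ilr_script.py
```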
In Linux, the volumes of the recovery point are mounted to the folder where the script is run. The attached disks, volumes, and the corresponding mount paths are shown accordingly. These mount paths are visible to users having root level access. Browse through the volumes mentioned in the script output.
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
Title: Overview of BareMetal Infrastructure Preview in Azure
description: Overview of the BareMetal Infrastructure in Azure. - Last updated 1/4/2021
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
Title: Connect BareMetal Instance units in Azure description: Learn how to identify and interact with BareMetal Instance units the Azure portal or Azure CLI. - Last updated 03/19/2021
batch Automatic Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/automatic-certificate-rotation.md
+
+ Title: Enable automatic certificate rotation in a Batch pool
+description: You can create a Batch pool with a managed identity and a certificate that will automatically be renewed.
+ Last updated : 03/23/2021
+# Enable automatic certificate rotation in a Batch pool
+
+ You can create a Batch pool with a certificate that will automatically be renewed. To do so, your pool must be created with a [user-assigned managed identity](managed-identity-pools.md) that will have access to the certificate in [Azure Key Vault](../key-vault/general/overview.md).
+
+> [!IMPORTANT]
+> Support for Azure Batch pools with user-assigned managed identities is currently in public preview for the following regions: West US 2, South Central US, East US, US Gov Arizona and US Gov Virginia.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Create a user-assigned identity
+
+First, [create your user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity) in the same tenant as your Batch account. This managed identity does not need to be in the same resource group or even in the same subscription.
+
+Be sure to note the **Client ID** of the user-assigned managed identity. You'll need this value later.
++
+## Create your certificate
+
+Next, you'll need to create a certificate and add it to Azure Key Vault. If you haven't already created a key vault, you'll need to do that first. For instructions, see [Quickstart: Set and retrieve a certificate from Azure Key Vault using the Azure portal](../key-vault/certificates/quick-create-portal.md).
+
+When creating your certificate, be sure to set **Lifetime Action Type** to automatically renew, and specify the number of days after which the certificate should renew.
++
+After your certificate has been created, make note of its **Secret Identifier**. You'll need this value later.
++
+## Add an access policy in Azure Key Vault
+
+In your key vault, assign a Key Vault access policy that allows your user-assigned managed identity to access secrets and certificates. For detailed instructions, see [Assign a Key Vault access policy using the Azure portal](../key-vault/general/assign-access-policy-portal.md).
+
+## Create a Batch pool with a user-assigned managed identity
+
+Create a Batch pool with your managed identity by using the [Batch .NET management library](/dotnet/api/overview/azure/batch#management-library). For more information, see [Configure managed identities in Batch pools](managed-identity-pools.md).
+
+The following example uses the Batch Management REST API to create a pool. Be sure to use your certificate's **Secret Identifier** for `observedCertificates` and your managed identity's **Client ID** for `msiClientId`, replacing the example data below.
+
+REST API URI
+
+```http
+PUT https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/<resourcegroupName>/providers/Microsoft.Batch/batchAccounts/<batchaccountname>/pools/<poolname>?api-version=2021-01-01
+```
+
+Request Body
+
+```json
+{
+ "name": "test2",
+ "type": "Microsoft.Batch/batchAccounts/pools",
+ "properties": {
+ "vmSize": "STANDARD_DS2_V2",
+ "taskSchedulingPolicy": {
+ "nodeFillType": "Pack"
+ },
+ "deploymentConfiguration": {
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": "canonical",
+ "offer": "ubuntuserver",
+ "sku": "18.04-lts",
+ "version": "latest"
+ },
+ "nodeAgentSkuId": "batch.node.ubuntu 18.04",
+ "extensions": [
+ {
+ "name": "KVExtensions",
+ "type": "KeyVaultForLinux",
+ "publisher": "Microsoft.Azure.KeyVault",
+ "typeHandlerVersion": "1.0",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "secretsManagementSettings": {
+ "pollingIntervalInS": "300",
+ "certificateStoreLocation": "/var/lib/waagent/Microsoft.Azure.KeyVault",
+ "requireInitialSync": true,
+ "observedCertificates": [
+ "https://testkvwestus2s.vault.azure.net/secrets/authcertforumatesting/8f5f3f491afd48cb99286ba2aacd39af"
+ ]
+ },
+ "authenticationSettings": {
+ "msiEndpoint": "http://169.254.169.254/metadata/identity",
+ "msiClientId": "b9f6dd56-d2d6-4967-99d7-8062d56fd84c"
+ }
+ },
+ "protectedSettings": {}
+ }
+ ]
+ }
+ },
+ "scaleSettings": {
+ "fixedScale": {
+ "targetDedicatedNodes": 1,
+ "resizeTimeout": "PT15M"
+ }
+ }
+ },
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/042998e4-36dc-4b7d-8ce3-a7a2c4877d33/resourceGroups/ACR/providers/Microsoft.ManagedIdentity/userAssignedIdentities/testumaforpools": {}
+ }
+ }
+}
+
+```
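Before sending the PUT request, it can help to assemble and sanity-check the URI and body programmatically. A minimal Python sketch with placeholder values and no network call (only the identity stanza is shown; merge it with the full properties body from the example above before sending):

```python
import json

# Placeholder values -- substitute your own.
subscription_id = "<subscriptionid>"
resource_group = "<resourcegroupName>"
account = "<batchaccountname>"
pool = "<poolname>"

# Build the management-plane URI from the example above.
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Batch/batchAccounts/{account}"
    f"/pools/{pool}?api-version=2021-01-01"
)

# Identity stanza only; "<identity resource ID>" is a placeholder for
# your user-assigned managed identity's full resource ID.
body = {
    "identity": {
        "type": "UserAssigned",
        "userAssignedIdentities": {
            "<identity resource ID>": {}
        },
    }
}
payload = json.dumps(body)
print(url)
```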
+
+## Validate the certificate
+
+To confirm that the certificate has been successfully deployed, sign in to the compute node. You should see output similar to the following:
+
+```
+root@74773db5fe1b42ab9a4b6cf679d929da000000:/var/lib/waagent/Microsoft.Azure.KeyVault.KeyVaultForLinux-1.0.1363.13/status# cat 1.status
+[{"status":{"code":0,"formattedMessage":{"lang":"en","message":"Successfully started Key Vault extension service. 2021-03-03T23:12:23Z"},"operation":"Service start.","status":"success"},"timestampUTC":"2021-03-03T23:12:23Z","version":"1.0"}]root@74773db5fe1b42ab9a4b6cf679d929da000000:/var/lib/waagent/Microsoft.Azure.KeyVault.KeyVaultForLinux-1.0.1363.13/status#
+```
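The status file is JSON, so the check can be scripted instead of eyeballed; a sketch using a trimmed copy of the sample output above:

```python
import json

# Trimmed sample of the extension's 1.status content shown above.
status_text = (
    '[{"status":{"code":0,"formattedMessage":{"lang":"en",'
    '"message":"Successfully started Key Vault extension service."},'
    '"operation":"Service start.","status":"success"},'
    '"timestampUTC":"2021-03-03T23:12:23Z","version":"1.0"}]'
)

# A code of 0 and status "success" indicate the extension started.
entries = json.loads(status_text)
started_ok = entries[0]["status"]["code"] == 0 and entries[0]["status"]["status"] == "success"
print("extension started:", started_ok)
```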
+
+## Next steps
+
+- Learn more about [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+- Learn how to use [customer-managed keys with user-managed identities](batch-customer-managed-key.md).
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/managed-identity-pools.md
Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 02/10/2021 Last updated : 03/23/2021
For more information, see [How to use managed identities for Azure resources on
- Learn more about [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). - Learn how to use [customer-managed keys with user-managed identities](batch-customer-managed-key.md).
+- Learn how to [enable automatic certificate rotation in a Batch pool](automatic-certificate-rotation.md).
cdn Cdn Create A Storage Account With Cdn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-create-a-storage-account-with-cdn.md
In the preceding steps, you created a CDN profile and an endpoint in a resource
## Next steps

> [!div class="nextstepaction"]
-> [Tutorial: Use CDN to server static content from a web app](cdn-add-to-web-app.md)
+> [Tutorial: Use CDN to serve static content from a web app](cdn-add-to-web-app.md)
cloud-services-extended-support Certificates And Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/certificates-and-key-vault.md
Key Vault is used to store certificates that are associated to Cloud Services (e
:::image type="content" source="media/certs-and-key-vault-1.png" alt-text="Image shows selecting access policies from the key vault blade.":::
-3. Ensure the access policies include the following properties:
+3. Ensure the access policies include the following property:
- **Enable access to Azure Virtual Machines for deployment**
- - **Enable access to Azure Resource Manager for template deployment**
:::image type="content" source="media/certs-and-key-vault-2.png" alt-text="Image shows access policies window in the Azure portal.":::
Key Vault is used to store certificates that are associated to Cloud Services (e
```json
<Certificate name="<your cert name>" thumbprint="<thumbprint in key vault>" thumbprintAlgorithm="sha1" />
```
+6. For deployment via ARM template, the `certificateUrl` value can be found by navigating to the certificate in the key vault; it's labeled as **Secret Identifier**.
+
+ :::image type="content" source="media/certs-and-key-vault-6.png" alt-text="Image shows the secret identifier field in the key vault.":::
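A Secret Identifier is just a URL, so its parts (vault host, secret name, version) can be split out programmatically; a sketch using a made-up identifier (replace it with the value copied from your key vault):

```python
from urllib.parse import urlparse

# Hypothetical Secret Identifier -- not a real vault or secret.
secret_id = "https://contosokv.vault.azure.net/secrets/mycert/0123456789abcdef0123456789abcdef"

# The path has the form /secrets/<name>/<version>.
parts = urlparse(secret_id)
_, kind, name, version = parts.path.split("/")
print(parts.hostname, kind, name, version)
```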
## Next steps

- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
```
-9. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. Ensure that you have enabled 'Access policies' (in portal) for access to 'Azure Virtual Machines for deployment' and 'Azure Resource Manager for template deployment'. The Key Vault must be located in the same region and subscription as cloud service and have a unique name. For more information see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+9. Create a Key Vault. This Key Vault will be used to store certificates that are associated with the Cloud Service (extended support) roles. The Key Vault must be located in the same region and subscription as the cloud service and have a unique name. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
```powershell
New-AzKeyVault -Name "ContosKeyVault" -ResourceGroupName "ContosOrg" -Location "East US"
```
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
10. Update the Key Vault access policy and grant certificate permissions to your user account.

    ```powershell
+ Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -EnabledForDeployment
Set-AzKeyVaultAccessPolicy -VaultName 'ContosKeyVault' -ResourceGroupName 'ContosOrg' -UserPrincipalName 'user@domain.com' -PermissionsToCertificates create,get,list,delete
```
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Deployments that utilized the old diagnostics plugins need the settings removed
## Key Vault creation
-Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration file. You also need to enable Key Vault 'Access policies' (in portal) for access to 'Azure Virtual Machines for deployment' and 'Azure Resource Manager for template deployment' so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). The key vault must be created in the same region and subscription as the cloud service. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration file. You also need to enable Key Vault 'Access policies' (in portal) for 'Azure Virtual Machines for deployment' so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). The key vault must be created in the same region and subscription as the cloud service. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
## Next steps

- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
```
-4. Add your key vault reference in the `OsProfile` section of the ARM template. Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration (.cscfg) file. You also need to enable Key Vault for appropriate permissions so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. The key vault must be located in the same region and subscription as cloud service and have a unique name. For more information, see [using certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
+4. Add your key vault reference in the `OsProfile` section of the ARM template. Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in the Service Configuration (.cscfg) file. You also need to enable the Key Vault 'Access policies' for 'Azure Virtual Machines for deployment' (in portal) so that the Cloud Services (extended support) resource can retrieve certificates stored as secrets from Key Vault. The key vault must be located in the same region and subscription as the cloud service and have a unique name. For more information, see [using certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
```json
"osProfile": {
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Content-Moderator/overview.md
Azure Content Moderator is an AI service that lets you handle content that is po
You may want to build content filtering software into your app to comply with regulations or maintain the intended environment for your users.
+This documentation contains the following article types:
+
+* [**Quickstarts**](client-libraries.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](try-text-api.md) contain instructions for using the service in more specific or customized ways.
+* [**Concepts**](text-moderation-api.md) provide in-depth explanations of the service functionality and features.
+* [**Tutorials**](ecommerce-retail-catalog-moderation.md) are longer guides that show you how to use the service as a component in broader business solutions.
+
## Where it's used

The following are a few scenarios in which a software developer or team would require a content moderation service:
cognitive-services Select Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/select-domain.md
From the settings tab of your Custom Vision project, you can select a domain for
|Domain|Purpose|
|---|---|
-|__General__| Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or if you're unsure of which domain to choose, select the General domain. ID: `ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`|
+|__General__| Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the General domains. ID: `ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`|
|__General [A1]__| Optimized for better accuracy with comparable inference time as General domain. Recommended for larger datasets or more difficult user scenarios. This domain requires more training time. ID: `a8e3c40f-fb4a-466f-832a-5e457ae4a344`|
+|__General [A2]__| Optimized for better accuracy with faster inference time than General[A1] and General domains. Recommended for most datasets. This domain requires less training time than General and General [A1] domains. ID: `2e37d7fb-3a54-486a-b4d6-cfc369af0018` |
|__Food__|Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain. ID: `c151d5b5-dd07-472a-acc8-15d29dea8518`|
|__Landmarks__|Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it. ID: `ca455789-012d-4b50-9fec-5bb63841c793`|
|__Retail__|Optimized for images that are found in a shopping catalog or shopping website. If you want high-precision classifying between dresses, pants, and shirts, use this domain. ID: `b30a91ae-e3c1-4f73-a81e-c270bff27c39`|
|__Compact domains__| Optimized for the constraints of real-time classification on edge devices.|
+
+> [!NOTE]
+> The General[A1] and General[A2] domains can be used for a broad set of scenarios and are optimized for accuracy. Use the General[A2] model for better inference speed and shorter training time. For larger datasets, you may want to use General[A1] to render better accuracy than General[A2], though it requires more training and inference time. The General model requires more inference time than both General[A1] and General[A2].
## Object Detection

|Domain|Purpose|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Overview/overview.md
QnA Maker is a cloud-based Natural Language Processing (NLP) service that allows
QnA Maker is commonly used to build conversational client applications, which include social media applications, chat bots, and speech-enabled desktop applications.
+QnA Maker doesn't store customer data. All customer data (question-answer pairs and chat logs) is stored in the region where the customer deploys the dependent service instances. For more details on dependent services, see [here](https://docs.microsoft.com/azure/cognitive-services/qnamaker/concepts/plan?tabs=v1).
+
## When to use QnA Maker

* **When you have static information** - Use QnA Maker when you have static information in your knowledge base of answers. This knowledge base is custom to your needs, which you've built with documents such as [PDFs and URLs](../Concepts/data-sources-and-content.md).
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial pose events
+> [!NOTE]
+> Viseme only works for `en-US-AriaNeural` voice in West US (`westus`) region for now, and will be available for all `en-US` voices by the end of April, 2021.
+ A viseme is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when speaking a word. Each viseme depicts the key facial poses for a specific set of phonemes.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
https://cris.ai -> Click on Adaptation Data -> scroll down to section "Pronuncia
| English (South Africa) | `en-ZA` | Text | |
| English (Tanzania) | `en-TZ` | Text | |
| English (United Kingdom) | `en-GB` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
-| English (United States) | `en-US` | Audio (20201019)<br>Text<br>Pronunciation| Yes |
+| English (United States) | `en-US` | Audio (20201019, 20210223)<br>Text<br>Pronunciation| Yes |
| Estonian (Estonia) | `et-EE` | Text | |
| Filipino (Philippines) | `fil-PH`| Text | |
| Finnish (Finland) | `fi-FI` | Text | Yes |
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
This response has been truncated to illustrate the structure of a response.
```json
[
-
+ { "Name": "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)", "DisplayName": "Aria",
This response has been truncated to illustrate the structure of a response.
"VoiceType": "Neural", "Status": "GA" },
-
+ ...
-
+ { "Name": "Microsoft Server Speech Text to Speech Voice (ga-IE, OrlaNeural)", "DisplayName": "Orla",
This response has been truncated to illustrate the structure of a response.
"VoiceType": "Neural", "Status": "Preview" },
-
+ ...
-
+ { "Name": "Microsoft Server Speech Text to Speech Voice (zh-CN, YunxiNeural)", "DisplayName": "Yunxi",
This response has been truncated to illustrate the structure of a response.
}, ...
-
+ { "Name": "Microsoft Server Speech Text to Speech Voice (ar-EG, Hoda)", "DisplayName": "Hoda",
audio-24khz-160kbitrate-mono-mp3
audio-24khz-96kbitrate-mono-mp3
audio-24khz-48kbitrate-mono-mp3
ogg-24khz-16bit-mono-opus
raw-48khz-16bit-mono-pcm
riff-48khz-16bit-mono-pcm
audio-48khz-96kbitrate-mono-mp3
audio-48khz-192kbitrate-mono-mp3
+webm-16khz-16bit-mono-opus
+webm-24khz-16bit-mono-opus
```

> [!NOTE]
> If your selected voice and output format have different bit rates, the audio is resampled as necessary.
> ogg-24khz-16bit-mono-opus can be decoded with [opus codec](https://opus-codec.org/downloads/)

### Request body
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
While using SSML, keep in mind that special characters, such as quotation marks,
## Supported SSML elements
Each SSML document is created with SSML elements (or tags). These elements are used to adjust pitch, prosody, volume, and more. The following sections detail how each element is used, and when an element is required or optional.
> [!IMPORTANT]
> Don't forget to use double quotes around attribute values. Standards for well-formed, valid XML require attribute values to be enclosed in double quotation marks. For example, `<prosody volume="90">` is a well-formed, valid element, but `<prosody volume=90>` is not. SSML may not recognize attribute values that are not in quotes.
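One way to guarantee quoted attribute values is to build SSML with an XML library rather than by string concatenation; a minimal sketch (the voice name is just one of the voices mentioned in this document, and the root element is simplified by omitting the SSML namespace attributes):

```python
import xml.etree.ElementTree as ET

# Build <speak><voice><prosody>...</prosody></voice></speak>.
# ElementTree always emits attribute values in double quotes,
# satisfying the well-formedness rule above.
speak = ET.Element("speak", {"version": "1.0"})
voice = ET.SubElement(speak, "voice", {"name": "en-US-AriaNeural"})
prosody = ET.SubElement(voice, "prosody", {"volume": "90"})
prosody.text = "Hello world"

ssml = ET.tostring(speak, encoding="unicode")
print(ssml)
```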
The `voice` element is required. It is used to specify the voice that is used fo
## Use multiple voices
Within the `speak` element, you can specify multiple voices for text-to-speech output. These voices can be in different languages. For each voice, the text must be wrapped in a `voice` element.
**Attributes**
Currently, speaking style adjustments are supported for these neural voices:
* `zh-CN-XiaoxuanNeural` (Preview)
* `zh-CN-XiaoruiNeural` (Preview)
The intensity of speaking style can be further changed to better fit your use case. You can specify a stronger or softer style with `styledegree` to make the speech more expressive or subdued.
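A `styledegree` adjustment is expressed as an attribute on the `mstts:express-as` element; a minimal sketch building the fragment as a string (the style, degree, and text are example values only):

```python
# Example values -- adjust to the style and degree your voice supports.
style = "cheerful"
styledegree = "2"
text = "That'd be just amazing!"

# The fragment is placed inside a voice element in a full SSML document.
fragment = (
    f'<mstts:express-as style="{style}" styledegree="{styledegree}">'
    f"{text}"
    "</mstts:express-as>"
)
print(fragment)
```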
Currently, speaking style adjustments are supported for these neural voices:

* `zh-CN-XiaoxiaoNeural`
Use this table to determine which speaking styles are supported for each neural
| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
| | `style="affectionate"` | Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The "personality" of the speaker is often endearing in nature. |
| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
| | `style="lyrical"` | Expresses emotions in a melodic and sentimental way |
| `zh-CN-YunyangNeural` | `style="customerservice"` | Expresses a friendly and helpful tone for customer support |
| `zh-CN-YunyeNeural` | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech. |
| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
| | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
Use this table to determine which speaking styles are supported for each neural
| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
| | `style="embarrassed"` | Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable |
| | `style="affectionate"` | Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The "personality" of the speaker is often endearing in nature. |
| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
| `zh-CN-XiaomoNeural` | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy | | | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. | | | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. | | | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. | | | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. | | | `style="depressed"` | Expresses a melancholic and despondent tone with lower pitch and energy |
-| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
+| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
| `zh-CN-XiaoxuanNeural` | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy | | | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. | | | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. | | | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. | | | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. | | | `style="depressed"` | Expresses a melancholic and despondent tone with lower pitch and energy |
-| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
+| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
| `zh-CN-XiaoruiNeural` | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. | | | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. | | | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
Use the `break` element to insert pauses (or breaks) between words, or prevent p
``` ## Add silence
-Use the `mstts:silence` element to insert pauses before or after text, or between the 2 adjacent sentences.
+Use the `mstts:silence` element to insert pauses before or after text, or between two adjacent sentences.
> [!NOTE]
->The difference between `mstts:silence` and `break` is that `break` can be added to any place in the text, but silence only works at the beginning or end of input text, or at the boundary of 2 adjacent sentences.
+>The difference between `mstts:silence` and `break` is that `break` can be added anywhere in the text, but silence only works at the beginning or end of input text, or at the boundary of two adjacent sentences.
**Syntax**
Use the `mstts:silence` element to insert pauses before or after text, or betwee
| Attribute | Description | Required / Optional | |--|-||
-| `type` | Specifies the location of silence be added: <ul><li>Leading – at the beginning of text </li><li>Tailing – in the end of text </li><li>Sentenceboundary – between adjacent sentences </li></ul> | Required |
+| `type` | Specifies the location where silence is added: <ul><li>`Leading` – at the beginning of the text </li><li>`Tailing` – at the end of the text </li><li>`Sentenceboundary` – between adjacent sentences </li></ul> | Required |
| `Value` | Specifies the absolute duration of a pause in seconds or milliseconds. This value should be set to less than 5000 ms. Examples of valid values are `2s` and `500ms`. | Required |

**Example**

In this example, `mstts:silence` is used to add 200 ms of silence between two sentences.

```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-<voice name="en-US-AriaNeural">
-<mstts:silence type="Sentenceboundary" value="200ms"/>
-If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way.
-A good place to start is by trying out the slew of educational apps that are helping children stay happy and smash their schooling at the same time.
-</voice>
-</speak>
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
+<voice name="en-US-AriaNeural">
+<mstts:silence type="Sentenceboundary" value="200ms"/>
+If we're home schooling, the best we can do is roll with what each day brings and try to have fun along the way.
+A good place to start is by trying out the slew of educational apps that are helping children stay happy and smash their schooling at the same time.
+</voice>
+</speak>
``` ## Specify paragraphs and sentences
Phonetic alphabets are composed of phones, which are made up of letters, numbers
Sometimes the text-to-speech service cannot accurately pronounce a word. For example, the name of a company, or a medical term. Developers can define how single entities are read in SSML using the `phoneme` and `sub` tags. However, if you need to define how multiple entities are read, you can create a custom lexicon using the `lexicon` tag. > [!NOTE]
-> Custom lexicon currently supports UTF-8 encoding.
+> Custom lexicon currently supports UTF-8 encoding.
> [!NOTE] > Custom lexicon is not supported for these 5 voices (et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural and mt-MT-GarceNeural) at the moment.
To define how multiple entities are read, you can create a custom lexicon, which
```xml <?xml version="1.0" encoding="UTF-8"?>
-<lexicon version="1.0"
+<lexicon version="1.0"
xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
http://www.w3.org/TR/2007/CR-pronunciation-lexicon-20071212/pls.xsd" alphabet="ipa" xml:lang="en-US"> <lexeme>
- <grapheme>BTW</grapheme>
- <alias>By the way</alias>
+ <grapheme>BTW</grapheme>
+ <alias>By the way</alias>
</lexeme> <lexeme>
- <grapheme> Benigni </grapheme>
+ <grapheme> Benigni </grapheme>
<phoneme> bɛˈniːnji</phoneme> </lexeme> </lexicon>
It's important to note, that you cannot directly set the pronunciation of a phra
```xml <lexeme>
- <grapheme>Scotland MV</grapheme>
- <alias>ScotlandMV</alias>
+ <grapheme>Scotland MV</grapheme>
+ <alias>ScotlandMV</alias>
</lexeme> <lexeme>
- <grapheme>ScotlandMV</grapheme>
+ <grapheme>ScotlandMV</grapheme>
<phoneme>ˈskɒtlənd.ˈmiːdiəm.weɪv</phoneme> </lexeme> ```
It's important to note, that you cannot directly set the pronunciation of a phra
You could also directly provide your expected `alias` for the acronym or abbreviated term. For example: ```xml <lexeme>
- <grapheme>Scotland MV</grapheme>
- <alias>Scotland Media Wave</alias>
+ <grapheme>Scotland MV</grapheme>
+ <alias>Scotland Media Wave</alias>
</lexeme> ```
After you've published your custom lexicon, you can reference it from your SSML.
> The `lexicon` element must be inside the `voice` element. ```xml
-<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
- xmlns:mstts="http://www.w3.org/2001/mstts"
+<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
+ xmlns:mstts="http://www.w3.org/2001/mstts"
xml:lang="en-US"> <voice name="en-US-JennyNeural"> <lexicon uri="http://www.example.com/customlexicon.xml"/>
After you've published your custom lexicon, you can reference it from your SSML.
</speak> ```
-When using this custom lexicon, "BTW" will be read as "By the way". "Benigni" will be read with the provided IPA "bɛˈniːnji".
+When using this custom lexicon, "BTW" will be read as "By the way". "Benigni" will be read with the provided IPA "bɛˈniːnji".
**Limitations** - File size: the maximum custom lexicon file size is 100 KB. If the file exceeds this limit, the synthesis request fails.
You can use `sapi` as the value for the `alphabet` attribute with custom lexi
```xml <?xml version="1.0" encoding="UTF-8"?>
-<lexicon version="1.0"
+<lexicon version="1.0"
xmlns="http://www.w3.org/2005/01/pronunciation-lexicon" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon
Because prosodic attribute values can vary over a wide range, the speech recogni
### Change speaking rate
-Speaking rate can be applied to Neural voices and standard voices at the word or sentence-level.
+Speaking rate can be applied to neural voices and standard voices at the word or sentence level.
**Example**
Pitch changes can be applied to standard voices at the word or sentence-level. W
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> <voice name="en-US-AriaNeural"> <prosody contour="(60%,-60%) (100%,+80%)" >
- Were you the only person in the room?
+ Were you the only person in the room?
</prosody> </voice> </speak>
The `say-as` element may contain only text.
**Example** The speech synthesis engine speaks the following example as "Your first request was for one room on October nineteenth twenty ten with early arrival at twelve thirty five PM."
-
+ ```XML <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"> <voice name="en-US-JennyNeural">
Only one background audio file is allowed per SSML document. However, you can in
## Bookmark element
-The `bookmark` element allows you insert bookmarks in the SSML and get the audio offset of each bookmark of audio stream for asynchronous notification.
+The `bookmark` element allows you to insert custom markers in SSML to get the offset of each marker in the audio stream.
+The `bookmark` element isn't read aloud, and can be used to reference a specific location in the text or tag sequence.
+
+> [!NOTE]
+> The `bookmark` element only works for the `en-US-AriaNeural` voice in the West US (`westus`) region for now.
**Syntax**
The `bookmark` element allows you insert bookmarks in the SSML and get the audio
| Attribute | Description | Required / Optional | |--|--||
-| `mark` | Specifies the bookmark text of the `bookmark` element. | Required. |
+| `mark` | Specifies the reference text of the `bookmark` element. | Required. |
**Example**
+As an example, you might want to know the time offset of each flower word in the following SSML:
+ ```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
- <voice name="en-US-GuyNeural">
- <bookmark mark='bookmark_one'/> one.
- <bookmark mark='bookmark_two'/> two. three. four.
+ <voice name="en-US-AriaNeural">
+ We are selling <bookmark mark='flower_1'/>roses and <bookmark mark='flower_2'/>daisies.
</voice> </speak> ```
You can subscribe to the `BookmarkReached` event in Speech SDK to get the bookma
> [!NOTE] > `BookmarkReached` event is only available since Speech SDK version 1.16.0.
+`BookmarkReached` events are raised as the output audio data becomes available, which is faster than playback to an output device.
+
+* `AudioOffset` reports the output audio's elapsed time between the beginning of synthesis and the bookmark element. This is measured in hundred-nanosecond units (HNS) with 10,000 HNS equivalent to 1 millisecond.
+* `Text` is the reference text of the bookmark element, which is the string you set in the `mark` attribute.
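The tick-to-millisecond conversion used in each of the SDK snippets below is plain arithmetic. As a minimal sketch (a helper of our own, not part of the Speech SDK), this divides a raw `AudioOffset` tick value by 10,000; the inputs are just the example offsets of 825 ms and 1462.5 ms expressed in HNS:

```python
def hns_to_ms(audio_offset: int) -> float:
    """Convert an audio offset from hundred-nanosecond units (HNS,
    also called ticks) to milliseconds: 10,000 HNS = 1 ms."""
    return audio_offset / 10_000

# 8,250,000 and 14,625,000 ticks correspond to the 825 ms and
# 1462.5 ms offsets shown in the example console output.
print(hns_to_ms(8_250_000))   # 825.0
print(hns_to_ms(14_625_000))  # 1462.5
```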
# [C#](#tab/csharp)
synthesizer.BookmarkReached += (s, e) =>
}; ```
+For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
+ # [C++](#tab/cpp) For more information, see <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#bookmarkreached" target="_blank"> `BookmarkReached` </a>.
For more information, see <a href="https://docs.microsoft.com/cpp/cognitive-serv
```cpp synthesizer->BookmarkReached += [](const SpeechSynthesisBookmarkEventArgs& e) {
- cout << "bookmark reached. "
+ cout << "Bookmark reached. "
// The unit of e.AudioOffset is tick (1 tick = 100 nanoseconds), divide by 10,000 to convert to milliseconds. << "Audio offset: " << e.AudioOffset / 10000 << "ms, "
- << "Bookmark text: " << e.Text << "." << endl;
+ << "bookmark text: " << e.Text << "." << endl;
}; ```
+For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
+ # [Java](#tab/java) For more information, see <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.bookmarkReached#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_BookmarkReached" target="_blank"> `BookmarkReached` </a>.
synthesizer.BookmarkReached.addEventListener((o, e) -> {
}); ```
+For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
+ # [Python](#tab/python) For more information, see <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#bookmark-reached" target="_blank"> `bookmark_reached` </a>.
speech_synthesizer.bookmark_reached.connect(lambda evt: print(
"Bookmark reached: {}, audio offset: {}ms, bookmark text: {}.".format(evt, evt.audio_offset / 10000, evt.text))) ```
+For the example SSML above, the `bookmark_reached` event will be triggered twice, and the console output will be
+```text
+Bookmark reached, audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached, audio offset: 1462.5ms, bookmark text: flower_2.
+```
+ # [JavaScript](#tab/javascript) For more information, see <a href="https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesizer#bookmarkReached" target="_blank"> `bookmarkReached`</a>. ```javascript synthesizer.bookmarkReached = function (s, e) {
- window.console.log("(Bookmark reached), Audio offset: " + e.audioOffset / 10000 + "ms. Bookmark text: " + e.text);
+ window.console.log("(Bookmark reached), Audio offset: " + e.audioOffset / 10000 + "ms, bookmark text: " + e.text);
} ```
+For the example SSML above, the `bookmarkReached` event will be triggered twice, and the console output will be
+```text
+(Bookmark reached), Audio offset: 825ms, bookmark text: flower_1.
+(Bookmark reached), Audio offset: 1462.5ms, bookmark text: flower_2.
+```
+ # [Objective-C](#tab/objectivec) For more information, see <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler" target="_blank"> `addBookmarkReachedEventHandler` </a>.
For more information, see <a href="https://docs.microsoft.com/objectivec/cogniti
}]; ```
+For the example SSML above, the `BookmarkReached` event will be triggered twice, and the console output will be
+```text
+Bookmark reached. Audio offset: 825ms, bookmark text: flower_1.
+Bookmark reached. Audio offset: 1462.5ms, bookmark text: flower_2.
+```
+ # [Swift](#tab/swift) For more information, see <a href="https://docs.microsoft.com/swift/cognitive-services/speech/spxspeechsynthesizer#addbookmarkreachedeventhandler" target="_blank"> `addBookmarkReachedEventHandler` </a>.
cognitive-services Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-translation.md
keywords: speech translation
[!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]
-In this overview, you learn about the benefits and capabilities of the speech translation service, which enables real-time, multi-language speech-to-speech and speech-to-text translation of audio streams. With the Speech SDK, your applications, tools, and devices have access to source transcriptions and translation outputs for provided audio. Interim transcription and translation results are returned as speech is detected, and final results can be converted into synthesized speech.
-
-Microsoft's translation engine is powered by two different approaches: statistical machine translation (SMT) and neural machine translation (NMT). SMT uses advanced statistical analysis to estimate the best possible translations given the context of a few words. With NMT, neural networks are used to provide more accurate, natural-sounding translations by using the full context of sentences to translate words.
-
-Today, Microsoft uses NMT for translation to most popular languages. All [languages available for speech-to-speech translation](language-support.md#speech-translation) are powered by NMT. Speech-to-text translation may use SMT or NMT depending on the language pair. When the target language is supported by NMT, the full translation is NMT-powered. When the target language isn't supported by NMT, the translation is a hybrid of NMT and SMT, using English as a "pivot" between the two languages.
+In this overview, you learn about the benefits and capabilities of the speech translation service, which enables real-time, [multi-language speech-to-speech](language-support.md#speech-translation) and speech-to-text translation of audio streams. With the Speech SDK, your applications, tools, and devices have access to source transcriptions and translation outputs for provided audio. Interim transcription and translation results are returned as speech is detected, and final results can be converted into synthesized speech.
## Core features
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
In this overview, you learn about the benefits and capabilities of the text-to-s
* Adjust speaking styles with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize speech-to-text outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, speed up or slow down speaking rate, increase or decrease volume, and attribute multiple voices to a single document. See the [how-to](speech-synthesis-markup.md) for adjusting speaking styles.
-* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service.
+* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service.
+
+> [!NOTE]
+> Viseme events currently work only for the `en-US-AriaNeural` voice in the West US (`westus`) region, and will be available for all `en-US` voices by the end of April 2021.
## Get started
cognitive-services Cognitive Services Apis Create Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account-cli.md
keywords: cognitive services, cognitive intelligence, cognitive solutions, ai services Previously updated : 09/14/2020 Last updated : 3/22/2021
az group delete --name cognitive-services-resource-group
## See also
-* [Authenticate requests to Azure Cognitive Services](authentication.md)
-* [What is Azure Cognitive Services?](./what-are-cognitive-services.md)
-* [Natural language support](language-support.md)
-* [Docker container support](cognitive-services-container-support.md)
+* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** to learn how to work securely with Cognitive Services.
+* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** for a list of the categories within Cognitive Services.
+* See **[Natural language support](language-support.md)** for the list of natural languages that Cognitive Services supports.
+* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to learn how to use Cognitive Services on-premises.
+* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate the cost of using Cognitive Services.
cognitive-services Cognitive Services Apis Create Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-apis-create-account.md
If you want to clean up and remove a Cognitive Services subscription, you can de
## See also
-* [Authenticate requests to Azure Cognitive Services](authentication.md)
-* [What is Azure Cognitive Services?](./what-are-cognitive-services.md)
-* [Create a new resource using the Azure Management client library](.\cognitive-services-apis-create-account-client-library.md)
-* [Natural language support](language-support.md)
-* [Docker container support](cognitive-services-container-support.md)
+* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** to learn how to work securely with Cognitive Services.
+* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** for a list of the categories within Cognitive Services.
+* See **[Natural language support](language-support.md)** for the list of natural languages that Cognitive Services supports.
+* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to learn how to use Cognitive Services on-premises.
+* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate the cost of using Cognitive Services.
cognitive-services Create Account Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/create-account-resource-manager-template.md
Previously updated : 09/14/2020 Last updated : 3/22/2021
az group delete --name $resourceGroupName
-## Next steps
+## See also
-* [Authenticate requests to Azure Cognitive Services](authentication.md)
-* [What is Azure Cognitive Services?](./what-are-cognitive-services.md)
-* [Natural language support](language-support.md)
-* [Docker container support](cognitive-services-container-support.md)
+* See **[Authenticate requests to Azure Cognitive Services](authentication.md)** to learn how to work securely with Cognitive Services.
+* See **[What are Azure Cognitive Services?](./what-are-cognitive-services.md)** for a list of the categories within Cognitive Services.
+* See **[Natural language support](language-support.md)** for the list of natural languages that Cognitive Services supports.
+* See **[Use Cognitive Services as containers](cognitive-services-container-support.md)** to learn how to use Cognitive Services on-premises.
+* See **[Plan and manage costs for Cognitive Services](plan-manage-costs.md)** to estimate the cost of using Cognitive Services.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/overview.md
Azure Form Recognizer is a cognitive service that lets you build automated data
Form Recognizer is composed of custom document processing models, prebuilt models for invoices, receipts, IDs and business cards, and the layout model. You can call Form Recognizer models by using a REST API or client library SDKs to reduce complexity and integrate it into your workflow or application.
-Form Recognizer is composed of the following
+This documentation contains the following article types:
+
+* [**Quickstarts**](quickstarts/client-library.md) are getting-started instructions to guide you through making requests to the service.
+* [**How-to guides**](build-training-data-set.md) contain instructions for using the service in more specific or customized ways.
+* [**Concepts**](concept-layout.md) provide in-depth explanations of the service functionality and features.
+* [**Tutorials**](tutorial-bulk-processing.md) are longer guides that show you how to use the service as a component in broader business solutions.
+
+## Form Recognizer features
+
+With Form Recognizer, you can easily extract and analyze form data with these features:
* **[Layout API](#layout-api)** - Extract text, selection marks, and tables structures, along with their bounding box coordinates, from documents. * **[Custom models](#custom-models)** - Extract text, key/value pairs, selection marks, and table data from forms. These models are trained with your own data, so they're tailored to your forms.
Form Recognizer is composed of the following
* [Business cards](./concept-business-cards.md) * [Identification (ID) cards](./concept-identification-cards.md)
-## Try it out
-To try out the Form Recognizer Service, go to the online Sample UI Tool:
-<!-- markdownlint-disable MD025 -->
-<!-- markdownlint-disable MD024 -->
+## Get started
+
+Use the sample Form Recognizer tool to try out layout extraction and the prebuilt models, and to train a custom model for your documents. You'll need an Azure subscription ([**create one for free**](https://azure.microsoft.com/free/cognitive-services)) and a [**Form Recognizer resource**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
### [v2.1 preview](#tab/v2-1)
To try out the Form Recognizer Service, go to the online Sample UI Tool:
> [Try Form Recognizer](https://fott.azurewebsites.net/)
+Follow the [Client library / REST API quickstart](./quickstarts/client-library.md) to get started extracting data from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer service.
+You can also use these REST samples (on GitHub) to get started:
+
+* Extract text, selection marks, and table structure from documents
+ * [Extract layout data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-layout.md)
+* Train custom models and extract form data
+ * [Train without labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-train-extract.md)
+ * [Train with labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
+* Extract data from invoices
+ * [Extract invoice data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-invoices.md)
+* Extract data from sales receipts
+ * [Extract receipt data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-receipts.md)
+* Extract data from business cards
+ * [Extract business card data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-business-cards.md)
+
+### Review the REST APIs
+
+You'll use the following APIs to train models and extract structured data from forms.
+
+|Name |Description |
+|||
+| **Analyze Layout** | Analyze a document passed in as a stream to extract text, selection marks, tables, and structure from the document |
+| **Train Custom Model**| Train a new model to analyze your forms by using five forms of the same type. Set the _useLabelFile_ parameter to `true` to train with manually labeled data. |
+| **Analyze Form** |Analyze a form passed in as a stream to extract text, key/value pairs, and tables from the form with your custom model. |
+| **Analyze Invoice** | Analyze an invoice to extract key information, tables, and other invoice text.|
+| **Analyze Receipt** | Analyze a receipt document to extract key information, and other receipt text.|
+| **Analyze ID** | Analyze an ID card document to extract key information, and other identification card text.|
+| **Analyze Business Card** | Analyze a business card to extract key information and text.|
+
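The operations in the table above are plain REST calls. As an illustrative sketch only (the v2.1-preview.3 URL path and the `Ocp-Apim-Subscription-Key` header are assumptions here; confirm both against the REST reference for your API version), this helper builds an Analyze Layout request, with the endpoint and key as placeholders:

```python
import urllib.request

# Placeholders -- substitute your own Form Recognizer resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def build_analyze_layout_request(endpoint: str, key: str,
                                 document: bytes) -> urllib.request.Request:
    """Build the POST request for the Analyze Layout operation.

    The v2.1-preview.3 path is an assumption; check the REST reference
    for the exact path of your API version."""
    url = endpoint.rstrip("/") + "/formrecognizer/v2.1-preview.3/layout/analyze"
    return urllib.request.Request(
        url,
        data=document,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/pdf",
        },
        method="POST",
    )

# Building the request doesn't contact the service; sending it does.
req = build_analyze_layout_request(ENDPOINT, API_KEY, b"%PDF- ...")
print(req.full_url)
# Analyze operations are asynchronous: a successful POST returns an
# Operation-Location header, which you poll with GET until the JSON
# body's "status" field reads "succeeded".
```

The other analyze operations in the table follow the same pattern with a different final path segment.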
+### [v2.1 preview](#tab/v2-1)
+
+Explore the [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
+
+### [v2.0](#tab/v2-0)
+
+Explore the [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
++ ## Layout API
The Business Cards model enables you to extract information such as the person's
:::image type="content" source="./media/overview-business-card.jpg" alt-text="sample business card" lightbox="./media/overview-business-card.jpg":::
-## Get started
-
-Use the Sample Form Recognizer Tool to try out Layout, Pre-built models and train a custom model for your documents:
-
-### [v2.1 preview](#tab/v2-1)
-
-> [!div class="nextstepaction"]
-> [Try Form Recognizer](https://fott-preview.azurewebsites.net/)
-
-### [v2.0](#tab/v2-0)
-
-> [!div class="nextstepaction"]
-> [Try Form Recognizer](https://fott.azurewebsites.net/)
--
-Follow the [Client library / REST API quickstart](./quickstarts/client-library.md) to get started extracting data from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-
-You can also use the REST samples (GitHub) to get started -
-
-* Extract text, selection marks, and table structure from documents
- * [Extract layout data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-layout.md)
-* Train custom models and extract form data
- * [Train without labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-train-extract.md)
- * [Train with labels - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-labeled-data.md)
-* Extract data from invoices
- * [Extract invoice data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-invoices.md)
-* Extract data from sales receipts
- * [Extract receipt data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-receipts.md)
-* Extract data from business cards
- * [Extract business card data - Python](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/rest/python-business-cards.md)
-
-### Review the REST APIs
-
-You'll use the following APIs to train models and extract structured data from forms.
-
-|Name |Description |
-|||
-| **Analyze Layout** | Analyze a document passed in as a stream to extract text, selection marks, tables, and structure from the document |
-| **Train Custom Model**| Train a new model to analyze your forms by using five forms of the same type. Set the _useLabelFile_ parameter to `true` to train with manually labeled data. |
-| **Analyze Form** |Analyze a form passed in as a stream to extract text, key/value pairs, and tables from the form with your custom model. |
-| **Analyze Invoice** | Analyze an invoice to extract key information, tables, and other invoice text.|
-| **Analyze Receipt** | Analyze a receipt document to extract key information, and other receipt text.|
-| **Analyze ID** | Analyze an ID card document to extract key information, and other identification card text.|
-| **Analyze Business Card** | Analyze a business card to extract key information and text.|
-
-### [v2.1 preview](#tab/v2-1)
-
-Explore the [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
-
-### [v2.0](#tab/v2-0)
-
-Explore the [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm) to learn more. If you're familiar with a previous version of the API, see the [What's new](./whats-new.md) article to learn about recent changes.
--- ## Input requirements [!INCLUDE [input requirements](./includes/input-requirements.md)]
cognitive-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/quickstarts/label-tool.md
keywords: document processing
<!-- markdownlint-disable MD024 --> <!-- markdownlint-disable MD033 --> <!-- markdownlint-disable MD034 -->
-# Train a Form Recognizer model with labels using the sample labeling tool
+# Train a custom model using the sample labeling tool
In this quickstart, you'll use the Form Recognizer REST API with the sample labeling tool to train a custom document processing model with manually labeled data. See the [Train with labels](../overview.md#train-with-labels) section of the overview to learn more about supervised learning with Form Recognizer.
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/what-are-cognitive-services.md
The following sections in this article provide a list of services that are part
|Service Name|Service Description| |:--|:|
-|[Computer Vision](./computer-vision/index.yml "Computer Vision")|The Computer Vision service provides you with access to advanced cognitive algorithms for processing images and returning information.|
-|[Custom Vision Service](./custom-vision-service/overview.md "Custom Vision Service")|The Custom Vision Service allows you to build custom image classifiers.|
-|[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition.|
-|[Form Recognizer](./form-recognizer/index.yml "Form Recognizer")|Form Recognizer identifies and extracts key-value pairs and table data from form documents; then outputs structured data including the relationships in the original file.|
-|[Video Indexer](../media-services/video-indexer/video-indexer-overview.md "Video Indexer")|Video Indexer enables you to extract insights from your video.|
+|[Computer Vision](./computer-vision/index.yml "Computer Vision")|The Computer Vision service provides you with access to advanced cognitive algorithms for processing images and returning information. See [Computer Vision quickstart](./computer-vision/quickstarts-sdk/client-library.md) to get started with the service.|
+|[Custom Vision Service](./custom-vision-service/index.yml "Custom Vision Service")|The Custom Vision Service lets you build, deploy, and improve your own image classifiers. An image classifier is an AI service that applies labels to images, based on their visual characteristics. |
+|[Face](./face/index.yml "Face")| The Face service provides access to advanced face algorithms, enabling face attribute detection and recognition. See [Face quickstart](./face/quickstarts/client-libraries.md) to get started with the service.|
+|[Form Recognizer](./form-recognizer/index.yml "Form Recognizer")|Form Recognizer identifies and extracts key-value pairs and table data from form documents; then outputs structured data including the relationships in the original file. See [Form Recognizer quickstart](./form-recognizer/quickstarts/client-library.md) to get started.|
+|[Video Indexer](../media-services/video-indexer/video-indexer-overview.md "Video Indexer")|Video Indexer enables you to extract insights from your video. See [Video Indexer quickstart](../media-services/video-indexer/video-indexer-get-started.md) to get started.|
## Speech APIs
The following sections in this article provide a list of services that are part
|Service Name|Service Description| |:--|:|
-|[Language Understanding LUIS](./luis/index.yml "Language Understanding")|Language Understanding service (LUIS) allows your application to understand what a person wants in their own words.|
-|[QnA Maker](./qnamaker/index.yml "QnA Maker")|QnA Maker allows you to build a question and answer service from your semi-structured content.|
-|[Text Analytics](./text-analytics/index.yml "Text Analytics")| Text Analytics provides natural language processing over raw text for sentiment analysis, key phrase extraction, and language detection.|
+|[Language Understanding LUIS](./luis/index.yml "Language Understanding")|Language Understanding (LUIS) is a cloud-based conversational AI service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information. See [LUIS quickstart](./luis/get-started-portal-build-app.md) to get started with the service.|
+|[QnA Maker](./qnamaker/index.yml "QnA Maker")|QnA Maker allows you to build a question and answer service from your semi-structured content. See [QnA Maker quickstart](./qnamaker/quickstarts/create-publish-knowledge-base.md) to get started with the service.|
+|[Text Analytics](./text-analytics/index.yml "Text Analytics")| Text Analytics provides natural language processing over raw text for sentiment analysis, key phrase extraction, and language detection. See [Text Analytics quickstart](./text-analytics/quickstarts/client-libraries-rest-api.md) to get started with the service.|
|[Translator](./translator/index.yml "Translator")|Translator provides machine-based text translation in near real-time.|
-| [Immersive Reader](./immersive-reader/index.yml "Immersive Reader") | Immersive Reader adds screen reading and comprehension capabilities to your applications. |
+| [Immersive Reader](./immersive-reader/index.yml "Immersive Reader") | Immersive Reader adds screen reading and comprehension capabilities to your applications. See [Immersive Reader quickstart](./immersive-reader/quickstarts/client-libraries.md) to get started with the service. |
## Decision APIs |Service Name|Service Description| |:--|:|
-|[Anomaly Detector](./anomaly-detector/index.yml "Anomaly Detector") |Anomaly Detector allows you to monitor and detect abnormalities in your time series data.|
-|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content.|
-|[Metrics Advisor](./metrics-advisor/index.yml) (Preview) | Metrics Advisor provides customizable anomaly detection on multi-variate time series data, and a fully featured web portal to help you use the service.|
-|[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior.|
+|[Anomaly Detector](./anomaly-detector/index.yml "Anomaly Detector") |Anomaly Detector allows you to monitor and detect abnormalities in your time series data. See [Anomaly Detector quickstart](./anomaly-detector/quickstarts/client-libraries.md) to get started with the service.|
+|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content. See [Content Moderator quickstart](./content-moderator/client-libraries.md) to get started with the service.|
+|[Metrics Advisor](./metrics-advisor/index.yml) (Preview) | Metrics Advisor provides customizable anomaly detection on multi-variate time series data, and a fully featured web portal to help you use the service. See [Metrics Advisor quickstart](./metrics-advisor/quickstarts/rest-api-and-client-library.md) to get started with the service. |
+|[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior. See [Personalizer quickstart](./personalizer/quickstart-personalizer-sdk.md) to get started with the service.|
## Search APIs
The following sections in this article provide a list of services that are part
|[Bing Local Business Search](/azure/cognitive-services/bing-local-business-search/ "Bing Local Business Search")| Bing Local Business Search API enables your applications to find contact and location information about local businesses based on search queries.| |[Bing Spell Check](/azure/cognitive-services/bing-spell-check/ "Bing Spell Check")|Bing Spell Check allows you to perform contextual grammar and spell checking.|
-## Development options
+## Get started with Cognitive Services
+
+Start by creating a Cognitive Services resource with hands-on quickstarts using the following methods:
+
+* [Azure portal](cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows "Azure portal")
+* [Azure CLI](cognitive-services-apis-create-account-cli.md?tabs=windows "Azure CLI")
+* [Azure SDK client libraries](cognitive-services-apis-create-account-client-library.md?pivots=programming-language-csharp "Azure SDK client libraries")
+* [Azure Resource Manager (ARM) templates](./create-account-resource-manager-template.md?tabs=portal "Azure Resource Manager (ARM) templates")
+
+## Using Cognitive Services in different development environments
With Azure and Cognitive Services, you have access to several development options, such as:
With Azure and Cognitive Services, you have access to several development option
To learn more, see [Cognitive Services development options](./cognitive-services-development-options.md).
-## Learn with the Quickstarts
-
-Start by creating a Cognitive Services resource with hands-on quickstarts using the following methods:
-
-* [Azure portal](cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows "Azure portal")
-* [Azure CLI](cognitive-services-apis-create-account-cli.md?tabs=windows "Azure CLI")
-* [Azure SDK client libraries](cognitive-services-apis-create-account-cli.md?tabs=windows "cognitive-services-apis-create-account-client-library?pivots=programming-language-csharp")
-* [Azure Resource Manager (ARM) templates](./create-account-resource-manager-template.md?tabs=portal "Azure Resource Manager (ARM) templates")
- <!-- ## Subscription management
Azure Cognitive Services provides a layered security model, including [authentic
## Containers for Cognitive Services
- Cognitive Services provides containers for deployment in the Azure cloud or on-premises. Learn more about [Cognitive Services Containers](cognitive-services-container-support.md "Cognitive Services Containers").
+ Azure Cognitive Services provides several Docker containers that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons. Learn more about [Cognitive Services Containers](cognitive-services-container-support.md "Cognitive Services Containers").
## Regional availability
Cognitive Services provides several support options to help you move forward wit
## Next steps * [Create a Cognitive Services account](cognitive-services-apis-create-account.md "Create a Cognitive Services account")
-* [What's new in Cognitive Services docs](whats-new-docs.md "What's new in Cognitive Services docs")
+* [What's new in Cognitive Services docs](whats-new-docs.md "What's new in Cognitive Services docs")
+* [Plan and manage costs for Cognitive Services](plan-manage-costs.md)
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/overview.md
Azure Communication Services has many samples available, which you can use to te
| Sample Name | Description | Languages/Platforms Available | | : | : | : |
-| [Group Calling Hero Sample](./calling-hero-sample.md) | Provides a sample of creating a group calling application. | Web, iOS |
+| [Group Calling Hero Sample](./calling-hero-sample.md) | Provides a sample of creating a group calling application. | Web, iOS, Android |
| [Web Calling Sample](./web-calling-sample.md) | A step by step walk-through of ACS Calling features within the Web. | Web | | [Chat Hero Sample](./chat-hero-sample.md) | Provides a sample of creating a chat application. | Web & C# .NET | | [Contoso Medical App](https://github.com/Azure-Samples/communication-services-contoso-med-app) | Sample app demonstrating a patient-doctor flow. | Web & Node.js |
Access code samples for quickstarts found on our documentation.
## Next Steps
+ - [Create a Communication Services resource](../quickstarts/create-communication-resource.md)
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
The following constraints are applicable on the operational data in Azure Cosmos
* Currently we do not support Azure Synapse Spark reading column names that contain blanks (white spaces).
-* Expect different behavior in regard to `NULL` values:
- * Spark pools in Azure Synapse will read these values as 0 (zero).
- * SQL serverless pools in Azure Synapse will read these values as `NULL`.
+* Expect different behavior in regard to explicit `null` values:
+ * Spark pools in Azure Synapse will read these values as `0` (zero).
+ * SQL serverless pools in Azure Synapse will read these values as `NULL` if, for the same property, the first document of the collection has a value of a datatype other than `integer`.
+ * SQL serverless pools in Azure Synapse will read these values as `0` (zero) if, for the same property, the first document of the collection has an integer value.
* Expect different behavior in regard to missing columns: * Spark pools in Azure Synapse will represent these columns as `undefined`.
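The `null`-handling rules above can be summarized in a small lookup. This is a hypothetical sketch in plain Python (not a Synapse API): it only encodes how an explicit `null` in a Cosmos DB document is surfaced by each engine, given the value the property had in the first document of the collection.

```python
# Hypothetical sketch (plain Python, not a Synapse API) of the null-handling
# rules described above for Azure Synapse engines reading the Cosmos DB
# analytical store.

def read_explicit_null(engine, first_doc_value):
    """Return what an explicit `null` is read as for the given engine.

    `first_doc_value` is the value the same property had in the first
    document of the collection, which drives SQL serverless type inference.
    """
    if engine == "spark":
        # Spark pools read explicit nulls as 0 (zero).
        return 0
    if engine == "sql-serverless":
        # SQL serverless reads nulls as 0 only when the property was
        # inferred as integer from the first document; otherwise NULL.
        return 0 if isinstance(first_doc_value, int) else None
    raise ValueError(f"unknown engine: {engine}")

print(read_explicit_null("spark", "text"))           # 0
print(read_explicit_null("sql-serverless", "text"))  # None, surfaced as NULL
print(read_explicit_null("sql-serverless", 42))      # 0
```

The takeaway: the same stored `null` can appear as `0` in one engine and `NULL` in the other, so downstream queries should not rely on one representation.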
The well-defined schema representation creates a simple tabular representation o
> [!NOTE] > If the Azure Cosmos DB analytical store follows the well-defined schema representation and the specification above is violated by certain items, those items will not be included in the analytical store.
+* Expect different behavior in regard to different datatypes in a well-defined schema:
+ * Spark pools in Azure Synapse will represent these values as `undefined`.
+ * SQL serverless pools in Azure Synapse will represent these values as `NULL`.
++ **Full fidelity schema representation** The full fidelity schema representation is designed to handle the full breadth of polymorphic schemas in the schema-agnostic operational data. In this schema representation, no items are dropped from the analytical store even if the well-defined schema constraints (that is no mixed data type fields nor mixed data type arrays) are violated.
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
Last updated 11/30/2020 -+ # Configure and use Azure Synapse Link for Azure Cosmos DB
cosmos-db Synapse Link Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-frequently-asked-questions.md
Last updated 11/30/2020+
cosmos-db Synapse Link Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-power-bi.md
Last updated 11/30/2020 + # Use Power BI and serverless Synapse SQL pool to analyze Azure Cosmos DB data with Synapse Link
cosmos-db Synapse Link Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link-use-cases.md
Last updated 05/19/2020 + # Azure Synapse Link for Azure Cosmos DB: Near real-time analytics use cases
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/synapse-link.md
Last updated 11/30/2020 + # What is Azure Synapse Link for Azure Cosmos DB?
cosmos-db Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/use-metrics.md
Previously updated : 07/22/2020 Last updated : 03/22/2021 # Monitor and debug with metrics in Azure Cosmos DB
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-linked-services.md
You create an Azure Machine Learning linked service to connect an Azure Machine
| mlWorkspaceName | Azure Machine Learning workspace name | Yes | | servicePrincipalId | Specify the application's client ID. | No | | servicePrincipalKey | Specify the application's key. | No |
-| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Required if updateResourceEndpoint is specified | No |
+| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. You can retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Required if updateResourceEndpoint is specified |
| connectVia | The Integration Runtime to be used to dispatch the activities to this linked service. You can use Azure Integration Runtime or Self-hosted Integration Runtime. If not specified, it uses the default Azure Integration Runtime. | No | ## Azure Data Lake Analytics linked service
You create an Azure Function linked service and use it with the [Azure Function
## Next steps
-For a list of the transformation activities supported by Azure Data Factory, see [Transform data](transform-data.md).
+For a list of the transformation activities supported by Azure Data Factory, see [Transform data](transform-data.md).
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-rest.md
The template defines two parameters:
5. Select **Web** activity. In **Settings**, specify the corresponding **URL**, **Method**, **Headers**, and **Body** to retrieve OAuth bearer token from the login API of the service that you want to copy data from. The placeholder in the template showcases a sample of Azure Active Directory (AAD) OAuth. Note AAD authentication is natively supported by REST connector, here is just an example for OAuth flow. | Property | Description |
- |: |: |: |
+ |: |: |
| URL | Specify the URL to retrieve the OAuth bearer token from. For example, in the sample here it's https://login.microsoftonline.com/microsoft.onmicrosoft.com/oauth2/token. |
| Method | The HTTP method. Allowed values are **Post** and **Get**. |
| Headers | Header is user-defined, which references one header name in the HTTP request. |
The template defines two parameters:
6. In **Copy data** activity, select *Source* tab, you could see that the bearer token (access_token) retrieved from previous step would be passed to Copy data activity as **Authorization** under Additional headers. Confirm settings for following properties before starting a pipeline run. | Property | Description |
- |: |: |: |
+ |: |: |
| Request method | The HTTP method. Allowed values are **Get** (default) and **Post**. |
| Additional headers | Additional HTTP request headers. |
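Outside of Data Factory, the two template steps amount to: POST a client-credentials request to the AAD token endpoint, then send the returned `access_token` as an extra `Authorization` header on the copy. The following is a hypothetical sketch in plain Python; the tenant, app ID, and resource values are placeholders, not values from the template.

```python
# Hypothetical sketch (plain Python, not Data Factory) of the Web activity's
# token request and the Copy activity's additional header. All credential
# values below are placeholders.

def build_token_request(tenant, client_id, client_secret, resource):
    """Shape of the Web activity's URL and body for the AAD OAuth sample."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    }
    return url, body

def additional_headers(access_token):
    """Header the Copy activity source passes under 'Additional headers'."""
    return {"Authorization": f"Bearer {access_token}"}

url, body = build_token_request(
    "microsoft.onmicrosoft.com", "<app-id>", "<app-secret>",
    "https://management.azure.com/",
)
print(url)  # https://login.microsoftonline.com/microsoft.onmicrosoft.com/oauth2/token
print(additional_headers("<access_token>")["Authorization"])
```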
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-service-identity.md
description: Learn about managed identity for Azure Data Factory.
Previously updated : 07/06/2020 Last updated : 03/23/2021
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article helps you understand what is managed identity for Data Factory (formerly known as Managed Service Identity/MSI) and how it works.
+This article helps you understand what a managed identity is for Data Factory (formerly known as Managed Service Identity/MSI) and how it works.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
This section shows you how to create a storage event trigger within the Azure Da
:::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI.":::
-1. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is optional, but be mindful that selecting all containers can lead to a large number of events.
-1. Select your storage account from the Azure subscription dropdown, or enter it manually by using its storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events.
> [!NOTE]
- > The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account.
+ > The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account. If you hit the limit, contact support for recommendations and to request an increase in the limit upon evaluation by the Event Grid team.
> [!NOTE] > To create a new or modify an existing Storage Event Trigger, the Azure account used to log into Data Factory and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on the storage account. No additional permission is required: Service Principal for the Azure Data Factory does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.
This section shows you how to create a storage event trigger within the Azure Da
1. The **Blob path begins with** and **Blob path ends with** properties allow you to specify the containers, folders, and blob names for which you want to receive events. Your storage event trigger requires at least one of these properties to be defined. You can use variety of patterns for both **Blob path begins with** and **Blob path ends with** properties, as shown in the examples later in this article. * **Blob path begins with:** The blob path must start with a folder path. Valid values include `2018/` and `2018/april/shoes.csv`. This field can't be selected if a container isn't selected.
- * **Blob path ends with:** The blob path must end with a file name or extension. Valid values include `shoes.csv` and `.csv`. Container and folder name are optional but, when specified, they must be separated by a `/blobs/` segment. For example, a container named 'orders' can have a value of `/orders/blobs/2018/april/shoes.csv`. To specify a folder in any container, omit the leading '/' character. For example, `april/shoes.csv` will trigger an event on any file named `shoes.csv` in folder a called 'april' in any container.
+ * **Blob path ends with:** The blob path must end with a file name or extension. Valid values include `shoes.csv` and `.csv`. Container and folder names, when specified, must be separated by a `/blobs/` segment. For example, a container named 'orders' can have a value of `/orders/blobs/2018/april/shoes.csv`. To specify a folder in any container, omit the leading '/' character. For example, `april/shoes.csv` will trigger an event on any file named `shoes.csv` in a folder called 'april' in any container.
* Note that Blob path **begins with** and **ends with** are the only pattern matching allowed in Storage Event Trigger. Other types of wildcard matching aren't supported for the trigger type. 1. Select whether your trigger will respond to a **Blob created** event, **Blob deleted** event, or both. In your specified storage location, each event will trigger the Data Factory pipelines associated with the trigger.
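Conceptually, the only matching the Storage Event Trigger performs is a prefix check (**begins with**) and a suffix check (**ends with**) on the blob path. The following is an illustrative sketch in plain Python, not Data Factory code; it ignores the `/container/blobs/...` form and simply shows why no other wildcard patterns apply.

```python
# Illustrative sketch (not Data Factory code) of the only pattern matching
# the Storage Event Trigger supports: a "begins with" prefix and an
# "ends with" suffix on the blob path. No other wildcards are evaluated.

def blob_path_matches(blob_path, begins_with=None, ends_with=None):
    """At least one of begins_with / ends_with must be set, as in the UI."""
    if begins_with is None and ends_with is None:
        raise ValueError("define at least one of begins_with / ends_with")
    if begins_with is not None and not blob_path.startswith(begins_with):
        return False
    if ends_with is not None and not blob_path.endswith(ends_with):
        return False
    return True

# Fires for shoes.csv in a folder called 'april':
print(blob_path_matches("april/shoes.csv", ends_with="april/shoes.csv"))  # True
# Prefix and suffix can be combined:
print(blob_path_matches("2018/april/shoes.csv",
                        begins_with="2018/", ends_with=".csv"))  # True
# A non-matching suffix never fires:
print(blob_path_matches("2018/march/boots.csv", ends_with="shoes.csv"))  # False
```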
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-python.md
Pipelines can ingest data from disparate data stores. Pipelines process or trans
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-* [Python 3.4+](https://www.python.org/downloads/).
+* [Python 3.6+](https://www.python.org/downloads/).
* [An Azure Storage account](../storage/common/storage-account-create.md).
databox-online Azure Stack Edge Gpu 2103 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2103-release-notes.md
+
+ Title: Azure Stack Edge Pro 2103 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge Pro running the 2103 release.
++
+
+++ Last updated : 03/23/2021+++
+# Azure Stack Edge 2103 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2103 release for your Azure Stack Edge devices. These release notes are applicable for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices. Features and issues that correspond to a specific model are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2103** release, which maps to software version number **2.2.1540.2890**. This software can be applied to your device if you are running at least Azure Stack Edge 2010 (2.1.1377.2170) software.
+
+## What's new
+
+The following new features are available in the Azure Stack Edge 2103 release.
+
+- **New features for Virtual Machines** - Beginning this release, you can perform the following management operations on the virtual machines via the Azure portal:
+ - Add multiple network interfaces to, or remove them from, an existing VM.
+ - Add multiple disks to, or remove them from, an existing VM.
+ - Resize the VM.
+ - Add custom data while deploying a Windows or a Linux VM.
+
+ You can also [Connect to the VM console on your device](azure-stack-edge-gpu-connect-virtual-machine-console.md) and troubleshoot any VM issues.
+
+- **Connect to PowerShell interface via https** - Starting this release, you will no longer be able to open a remote PowerShell session into a device over *http*. By default, *https* will be used for all the sessions. For more information, see how to [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) of your device.
+
+- **Improvements for Compute** - Several enhancements and improvements were made including those for:
+
+ - **Overall compute platform quality**. This release has bug fixes to improve the overall compute platform quality. See the [Issues fixed in 2103 release](#issues-fixed-in-2103-release).
+ - **Compute platform components**. Security updates were applied to Compute VM image. IoT Edge and Azure Arc for Kubernetes versions were also updated.
+ - **Diagnostics**. A new API is released to check resource and network conditions. You can connect to the PowerShell interface of the device and use the `Test-HcsKubernetesStatus` command to verify the network readiness of the device.
+ - **Log collection** that would lead to improved debugging.
+ - **Alerting infrastructure** that will allow you to detect IP address conflicts for compute IP addresses.
+ - **Mixed workloads** of Kubernetes and local Azure Resource Manager.
+
+- **Proactive logging by default** - Starting this release, proactive log collection is enabled by default on your device. This feature allows Microsoft to collect logs proactively based on the system health indicators to help efficiently troubleshoot any device issues. For more information, see [Proactive log collection on your device](azure-stack-edge-gpu-proactive-log-collection.md).
+
+## Issues fixed in 2103 release
+
+The following table lists the issues that were noted in previous releases and are fixed in the current release.
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|Kubernetes |Edge container registry does not work when web proxy is enabled.|
+|**2.**|Kubernetes |Edge container registry does not work with IoT Edge modules.|
+
+## Known issues in 2103 release
+
+The following table provides a summary of known issues in the 2103 release.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Preview features |For this release, the following features: Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, Azure Arc enabled Kubernetes, VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R, Multi-process service (MPS) for Azure Stack Edge Pro GPU - are all available in preview. |These features will be generally available in later releases. |
|**2.**|GPU VMs |Prior to this release, the GPU VM lifecycle was not managed in the update flow. Hence, when updating to the 2103 release, GPU VMs are not stopped automatically during the update. You will need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully, and the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via `stop-stayProvisioned` before the update are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VMs after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were originally used by the VMs. <br>The longer the GPU VMs are in a stopped state, the higher the chances that Kubernetes will take over the GPUs. |
+|**3.**|Custom script VM extension |There is a known issue with Windows VMs that were created in an earlier release, after the device is updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (version 2.7.41491.901 only) gets stuck in the update, causing the extension deployment to time out. | To work around this issue: <ul><li> Connect to the Windows VM using remote desktop protocol (RDP). </li><li> Make sure that `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. </li><li> If `waappagent.exe` is not running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.</li><li> While `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. </li><li>After you kill the process, the process starts running again with the newer version.</li><li>Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.</li><li>[Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). </li></ul> |
+|**4.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting is not retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
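The `stop-stayProvisioned` workaround for GPU VMs (issue 2 in the table above) can be sketched as follows. This is a sketch only: it assumes the AzureRM cmdlets used against the device's local Azure Resource Manager endpoint, and the resource group and VM names are placeholders; see the linked article for the authoritative steps.

```powershell
# Sketch: stop a GPU VM before the device update so it restarts cleanly afterward.
# -StayProvisioned keeps the VM provisioned so it's started automatically after the update.
Stop-AzureRmVM -ResourceGroupName "myasegpuvmrg" -Name "myasegpuvm1" -StayProvisioned
```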
++
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+|-----|---------|-------|---------------------|
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [https://docs.microsoft.com/azure/iot-edge/tutorial-store-data-sql-server#create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply**.</li><li>Download `sqlcmd` on your client machine from https://docs.microsoft.com/sql/tools/sqlcmd-utility. </li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>The final command will look like this: `sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd"`.</li><li>After this, steps 3-4 from the current documentation should be identical.</li></ul> |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported. |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<ul><li>Create a blob in the cloud, or delete a previously uploaded blob from the device.</li><li>Refresh the blob from the cloud into the appliance using the refresh functionality.</li><li>Update only a portion of the blob using Azure SDK REST APIs.</li></ul>These actions can result in the updated sections of the blob not getting updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error shows as follows:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>`mkdir: cannot create directory 'test': Permission denied`|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<ul><li> Only block blobs are supported. Page blobs are not supported.</li><li>There is no snapshot or copy API support.</li><li> Hadoop workload ingestion through `distcp` is not supported as it uses the copy operation heavily.</li></ul>||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You will need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Azure Arc enabled Kubernetes |For the GA release, Azure Arc enabled Kubernetes is updated from version 0.1.18 to 0.2.9. As the Azure Arc enabled Kubernetes update is not supported on Azure Stack Edge device, you will need to redeploy Azure Arc enabled Kubernetes.|Follow these steps:<ol><li>[Apply device software and Kubernetes updates](azure-stack-edge-gpu-install-update.md).</li><li>Connect to the [PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md).</li><li>Remove the existing Azure Arc agent. Type: `Remove-HcsKubernetesAzureArcAgent`.</li><li>Deploy [Azure Arc to a new resource](azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md). Do not use an existing Azure Arc resource.</li></ol>|
+|**10.**|Azure Arc enabled Kubernetes|Azure Arc deployments are not supported if web proxy is configured on your Azure Stack Edge Pro device.||
+|**11.**|Kubernetes |Port 31000 is reserved for the Kubernetes Dashboard. Port 31001 is reserved for the Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10 are reserved for the Kubernetes service and the Core DNS service respectively.|Do not use reserved IPs.|
+|**12.**|Kubernetes |Kubernetes does not currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**13.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see Modify Azure IoT Edge modules from marketplace to run on Azure Stack Edge device.<!-- insert link-->|
+|**14.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts cannot be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**15.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates are not picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**16.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
+|**17.**|IoT Edge |Modules deployed through IoT Edge can't use host network. | |
+|**18.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
+|**19.**|Kubernetes + update |Earlier software versions such as 2008 releases have a race condition update issue that causes the update to fail with ClusterConnectionException. |Using the newer builds should help avoid this issue. If you still see this issue, the workaround is to retry the upgrade, and it should work.|
+|**20.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**21.**|Kubernetes Dashboard | *Https* endpoint for Kubernetes Dashboard with SSL certificate is not supported. | |
+|**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for the Event Grid IoT Edge module to function on the Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration&view=aspnetcore-3.1&preserve-view=true#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/use-gitops-connected-cluster.md#additional-parameters). |
+|**24.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**25.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
+|**26.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, you can create one VM that uses a GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available GPU. |
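The multi-protocol LoadBalancer workaround described in issue 12 above can be sketched as two services over the same pod selector. This is a sketch assuming MetalLB's `metallb.universe.tf/allow-shared-ip` annotation; the service names, selector, and IP address are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns"   # same sharing key on both services
spec:
  type: LoadBalancer
  loadBalancerIP: 172.28.0.100                   # same IP on both services
  selector:
    app: dns
  ports:
    - name: dns-tcp
      protocol: TCP
      port: 53
---
apiVersion: v1
kind: Service
metadata:
  name: dns-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: "dns"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.28.0.100
  selector:
    app: dns
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
```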
++
+<!--|**18.**|Azure Private Edge Zone (Preview) |There is a known issue with Virtual Network Function VM if the VM was created on Azure Stack Edge device running earlier preview builds such as 2006/2007b and then the device was updated to 2009 GA release. The issue is that the VNF information can't be retrieved or any new VNFs can't be created unless the VNF VMs are deleted before the device is updated. |Before you update Azure Stack Edge device to 2009 release, use the PowerShell command `get-mecvnf` followed by `remove-mecvnf <VNF guid>` to remove all Virtual Network Function VMs one at a time. After the upgrade, you will need to redeploy the same VNFs.|-->
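For the ":" restriction in environment variable names (issue 22 above), the replacement looks like the following sketch of a container spec's `env` section; the key name and value are placeholders. ASP.NET Core's environment variable configuration provider translates the double underscore back into ":".

```yaml
env:
  - name: Logging__LogLevel__Default   # instead of Logging:LogLevel:Default
    value: "Information"
```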
++
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Connect Virtual Machine Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-virtual-machine-console.md
+
+ Title: Connect to the virtual machine console on Azure Stack Edge Pro GPU device
+description: Describes how to connect to the virtual machine console on a VM running on Azure Stack Edge Pro GPU device.
++++++ Last updated : 03/22/2021++
+# Connect to a virtual machine console on an Azure Stack Edge Pro GPU device
++
+Azure Stack Edge Pro GPU solution runs non-containerized workloads via virtual machines. This article describes how to connect to the console of a virtual machine deployed on your device.
+
+The virtual machine console allows you to access your VMs with keyboard, mouse, and screen features using commonly available remote desktop tools. You can access the console to troubleshoot any issues experienced when deploying a virtual machine on your device. You can connect to the virtual machine console even if your VM has failed to provision.
++
+## Workflow
+
+The high-level workflow has the following steps:
+
+1. Connect to the PowerShell interface on your device.
+1. Enable console access to the VM.
+1. Connect to the VM using remote desktop protocol (RDP).
+1. Revoke console access to the VM.
+
+## Prerequisites
+
+Before you begin, make sure that you have completed the following prerequisites:
+
+#### For your device
+
+You should have access to an Azure Stack Edge Pro GPU device that is activated. The device must have one or more VMs deployed on it. You can deploy VMs via Azure PowerShell, via the templates, or via the Azure portal.
+
+#### For client accessing the device
+
+Make sure that you have access to a client system that:
+
+- Can access the PowerShell interface of the device, and is running a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device).
+- Is running PowerShell 7.0 or later. This version of PowerShell works for Windows, Mac, and Linux clients. See instructions to [install PowerShell 7.0](/powershell/scripting/whats-new/what-s-new-in-powershell-70?view=powershell-7.1&preserve-view=true).
+- Has remote desktop capabilities. Depending on whether you are using Windows, macOS, or Linux, install one of these [Remote desktop clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients). This article provides instructions for [Windows Remote Desktop](/windows-server/remote/remote-desktop-services/clients/windowsdesktop#install-the-client) and [FreeRDP](https://www.freerdp.com/). <!--Which version of FreeRDP to use?-->
++
+## Connect to VM console
+
+Follow these steps to connect to the virtual machine console on your device.
+
+### Connect to the PowerShell interface on your device
+
+The first step is to [Connect to the PowerShell interface](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) of your device.
+
+### Enable console access to the VM
+
+1. In the PowerShell interface, run the following command to enable access to the VM console.
+
+ ```powershell
+ Grant-HcsVMConnectAccess -ResourceGroupName <VM resource group> -VirtualMachineName <VM name>
+ ```
+2. In the sample output, make a note of the virtual machine ID. You'll need this for a later step.
+
+ ```powershell
+ [10.100.10.10]: PS>Grant-HcsVMConnectAccess -ResourceGroupName mywindowsvm1rg -VirtualMachineName mywindowsvm1
+
+ VirtualMachineId : 81462e0a-decb-4cd4-96e9-057094040063
+ VirtualMachineHostName : 3V78B03
+ ResourceGroupName : mywindowsvm1rg
+ VirtualMachineName : mywindowsvm1
+ Id : 81462e0a-decb-4cd4-96e9-057094040063
+ [10.100.10.10]: PS>
+ ```
+
+### Connect to the VM
+
+You can now use a Remote Desktop client to connect to the virtual machine console.
+
+#### Use Windows Remote Desktop
+
+1. Create a new text file and input the following text.
+
+ ```
+ pcb:s:<VM ID from PowerShell>;EnhancedMode=0
+ full address:s:<IP address of the device>
+ server port:i:2179
+ username:s:EdgeARMUser
+ negotiate security layer:i:0
+ ```
+1. Save the file with a *.rdp* extension on your client system. You'll use this profile to connect to the VM.
+1. Double-click the profile to connect to the VM. Provide the following credentials:
+
+ - **Username**: Sign in as EdgeARMUser.
+ - **Password**: Provide the local Azure Resource Manager password for your device. If you have forgotten the password, [Reset Azure Resource Manager password via the Azure portal](azure-stack-edge-gpu-set-azure-resource-manager-password.md#reset-password-via-the-azure-portal).
+
+#### Use FreeRDP
+
+If using FreeRDP on your Linux client, run the following command:
+
+```powershell
+./wfreerdp /u:EdgeARMUser /vmconnect:<VM ID from PowerShell> /v:<IP address of the device>
+```
+
+## Revoke VM console access
+
+To revoke access to the VM console, return to the PowerShell interface of your device. Run the following command:
+
+```
+Revoke-HcsVMConnectAccess -ResourceGroupName <VM resource group> -VirtualMachineName <VM name>
+```
+Here is an example output:
+
+```powershell
+[10.100.10.10]: PS>Revoke-HcsVMConnectAccess -ResourceGroupName mywindowsvm1rg -VirtualMachineName mywindowsvm1
+
+VirtualMachineId : 81462e0a-decb-4cd4-96e9-057094040063
+VirtualMachineHostName : 3V78B03
+ResourceGroupName : mywindowsvm1rg
+VirtualMachineName : mywindowsvm1
+Id : 81462e0a-decb-4cd4-96e9-057094040063
+
+[10.100.10.10]: PS>
+```
+> [!NOTE]
+> We recommend that after you are done using the VM console, you either revoke the access or close the PowerShell window to exit the session.
+
+## Next steps
+
+- Troubleshoot [Azure Stack Edge Pro](azure-stack-edge-gpu-troubleshoot.md) in Azure portal.
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 02/21/2021 Last updated : 03/23/2021 # Update your Azure Stack Edge Pro GPU
This article describes the steps required to install update on your Azure Stack
The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version. > [!IMPORTANT]
-> - Update **2101** is the current update and corresponds to:
-> - Device software version - **2.2.1473.2521**
+> - Update **2103** is the current update and corresponds to:
+> - Device software version - **2.2.1540.2890**
> - Kubernetes server version - **v1.17.3**
-> - IoT Edge version: **0.1.0-beta10**
+> - IoT Edge version: **0.1.0-beta13**
+> - GPU driver version: **460.32.03**
+> - CUDA version: **11.2**
>
-> For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2101-release-notes.md).
-> - To apply 2101 update, your device must be running 2010.
+> For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2103-release-notes.md).
> - To apply the 2103 update, your device must be running 2010. If you are not running the minimum supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*.
> - Keep in mind that installing an update or hotfix restarts your device. This update contains the device software updates and the Kubernetes updates. Given that the Azure Stack Edge Pro is a single node device, any I/O in progress is disrupted and your device experiences a downtime of up to 1.5 hours for the update. To install updates on your device, you first need to configure the location of the update server. After the update server is configured, you can apply the updates via the Azure portal UI or the local web UI.
We recommend that you install updates through the Azure portal. The device autom
![Software version after update 10](./media/azure-stack-edge-gpu-install-update/portal-update-9.png)
-6. As this is a 1-node device, the device restarts after the updates are installed. The critical alert during the restart indicates that the device heartbeat is lost.
+6. As this is a 1-node device, the device restarts after the updates are installed. The critical alert during the restart indicates that the device heartbeat is lost.
![Software version after update 11](./media/azure-stack-edge-gpu-install-update/portal-update-10.png)
We recommend that you install updates through the Azure portal. The device autom
![Software version after update 12](./media/azure-stack-edge-gpu-install-update/portal-update-11.png) -
-7. After the restart, the device is again put in the maintenance mode and an informational alert is displayed to indicate that.
-
- If you select the **Update device** from the top command bar, you can see the progress of the updates.
+7. After the restart, if you select the **Update device** from the top command bar, you can see the progress of the updates.
8. The device status updates to **Online** after the updates are installed.
We recommend that you install updates through the Azure portal. The device autom
![Software version after update 14](./media/azure-stack-edge-gpu-install-update/portal-update-15.png)
-<!--9. You will again see a notification that updates are available. These are the Kubernetes updates. Select the notification or select **Update device** from the top command bar.
-
- ![Software version after update 15](./media/azure-stack-edge-gpu-install-update/portal-update-16.png)
-
-10. Download the Kubernetes updates. You can see that the package size is different when compared to the previous update package.
-
- ![Software version after update 16](./media/azure-stack-edge-gpu-install-update/portal-update-17.png)
-
- The process of installation is identical to that of device updates. First the updates are downloaded.
-
- ![Software version after update 17](./media/azure-stack-edge-gpu-install-update/portal-update-18.png)
-
-11. Once the updates are downloaded, you can then install the updates.
-
- ![Software version after update 18](./media/azure-stack-edge-gpu-install-update/portal-update-19.png)
-
- As the updates are installed, the device is put into maintenance mode. The device does not restart for the Kubernetes updates. -->
Once the device software and Kubernetes updates are successfully installed, the banner notification disappears. Your device has now the latest version of device software and Kubernetes.
Do the following steps to download the update from the Microsoft Update Catalog.
2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge Pro**, and then click **Search**.
- The update listing appears as **Azure Stack Edge Update 2101**.
+ The update listing appears as **Azure Stack Edge Update 2103**.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
-4. Select **Download**. There are two files to download with *SoftwareUpdatePackage.exe* and *Kubernetes_Package.exe* suffixes that correspond to device software updates and Kubernetes updates respectively. Download the files to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
+4. Select **Download**. There are two packages to download: KB 4613486 and KB 46134867 that correspond to the device software updates (*SoftwareUpdatePackage.exe*) and Kubernetes updates (*Kubernetes_Package.exe*) respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
### Install the update or the hotfix
This procedure takes around 20 minutes to complete. Perform the following steps
5. The update starts. After the device is successfully updated, it restarts. The local UI is not accessible during this time.
-6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2101**.
+6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2103**.
<!--![update device 6](./media/azure-stack-edge-gpu-install-update/local-ui-update-6.png)-->
databox-online Azure Stack Edge Gpu Prepare Windows Vhd Generalized Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md
+
+ Title: Create VM images from generalized image of Windows VHD for your Azure Stack Edge Pro GPU device
+description: Describes how to create VM images from generalized images starting from a Windows VHD or a VHDX. Use this generalized image to create VM images to use with VMs deployed on your Azure Stack Edge Pro GPU device.
++++++ Last updated : 03/18/2021+
+#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
++
+# Use generalized image from Windows VHD to create a VM image for your Azure Stack Edge Pro device
++
+To deploy VMs on your Azure Stack Edge Pro device, you need to be able to create custom VM images. This article describes the steps required to prepare a Windows VHD or VHDX to create a generalized image. This generalized image is then used to create a VM image for your Azure Stack Edge Pro device.
+
+## About preparing Windows VHD
+
+A Windows VHD or VHDX can be used to create a *generalized* image or a *specialized* image. The following table summarizes key differences between the *generalized* and the *specialized* images.
++
+|Image type |Generalized |Specialized |
+||||
+|Target |Deployed on any system | Targeted to a specific system |
+|Setup after boot | Setup required at first boot of the VM. | Setup not needed. <br> Platform turns the VM on. |
+|Configuration |Hostname, admin-user, and other VM-specific settings required. |Completely pre-configured. |
+|Used when |Creating multiple new VMs from the same image. |Migrating a specific machine or restoring a VM from previous backup. |
++
+This article covers steps required to deploy from a generalized image. To deploy from a specialized image, see [Use specialized Windows VHD](azure-stack-edge-placeholder.md) for your device.
+
+> [!IMPORTANT]
+> This procedure does not cover cases where the source VHD is configured with custom configurations and settings. For example, additional actions may be required to generalize a VHD containing custom firewall rules or proxy settings. For more information on these additional actions, see [Prepare a Windows VHD to upload to Azure - Azure Virtual Machines](../virtual-machines/windows/prepare-for-upload-vhd-image.md).
++
+## VM image workflow
+
+The high-level workflow to prepare a Windows VHD for use as a generalized image has the following steps:
+
+1. Convert the source VHD or VHDX to a fixed size VHD.
+1. Create a VM in Hyper-V using the fixed VHD.
+1. Connect to the Hyper-V VM.
+1. Generalize the VHD using the *sysprep* utility.
+1. Copy the generalized image to Blob storage.
+1. Use generalized image to deploy VMs on your device. For more information, see how to [deploy a VM via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md) or [deploy a VM via PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
++
+## Prerequisites
+
+Before you prepare a Windows VHD for use as a generalized image on Azure Stack Edge, make sure that:
+
+- You have a VHD or a VHDX containing a supported version of Windows. See [Supported guest operating systems]() for your Azure Stack Edge Pro.
+- You have access to a Windows client with Hyper-V Manager installed.
+- You have access to an Azure Blob storage account to store your VHD after it is prepared.
+
+## Prepare a generalized Windows image from VHD
+
+## Convert to a fixed VHD
+
+For your device, you'll need fixed-size VHDs to create VM images. You'll need to convert your source Windows VHD or VHDX to a fixed VHD. Follow these steps:
+
+1. Open Hyper-V Manager on your client system. Go to **Edit Disk**.
+
+ ![Open Hyper-V manager](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-1.png)
+
+1. On the **Before you begin** page, select **Next>**.
+
+1. On the **Locate virtual hard disk** page, browse to the location of the source Windows VHD or VHDX that you wish to convert. Select **Next>**.
+
+ ![Locate virtual hard disk page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-2.png)
+
+1. On the **Choose action** page, select **Convert** and select **Next>**.
+
+ ![Choose action page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-3.png)
+
+1. On the **Choose disk format** page, select **VHD** format and then select **Next>**.
+
+ ![Choose disk format page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-4.png)
++
+1. On the **Choose disk type** page, choose **Fixed size** and select **Next>**.
+
+ ![Choose disk type page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-5.png)
++
+1. On the **Configure disk** page, browse to the location and specify a name for the fixed size VHD disk. Select **Next>**.
+
+ ![Configure disk page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/convert-fixed-vhd-6.png)
++
+1. Review the summary and select **Finish**. The VHD or VHDX conversion takes a few minutes. The time for conversion depends on the size of the source disk.
+
+<!--
+1. Run PowerShell on your Windows client.
+1. Run the following command:
+
+ ```powershell
+ Convert-VHD -Path <source VHD path> -DestinationPath <destination-path.vhd> -VHDType Fixed
+ ```
+-->
+You'll use this fixed VHD for all the subsequent steps in this article.
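If you prefer PowerShell over Hyper-V Manager for the conversion, the `Convert-VHD` cmdlet performs the same operation. This assumes the Hyper-V PowerShell module is installed on the client; the paths are placeholders:

```powershell
# Convert the source VHD/VHDX to a fixed-size VHD.
Convert-VHD -Path "C:\vhds\source.vhdx" -DestinationPath "C:\vhds\fixed-disk.vhd" -VHDType Fixed
```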
++
+## Create a Hyper-V VM from fixed VHD
+
+1. In **Hyper-V Manager**, in the scope pane, right-click your system node to open the context menu, and then select **New** > **Virtual Machine**.
+
+ ![Select new virtual machine in scope pane](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-1.png)
+
+1. On the **Before you begin** page of the New Virtual Machine Wizard, select **Next**.
+
+1. On the **Specify name and location** page, provide a **Name** and **location** for your virtual machine. Select **Next**.
+
+ ![Specify name and location for your VM](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-2.png)
+
+1. On the **Specify generation** page, choose **Generation 1** for the .vhd device image type, and then select **Next**.
+
+ ![Specify generation](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-3.png)
+
+1. Assign your desired memory and networking configurations.
+
+1. On the **Connect virtual hard disk** page, choose **Use an existing virtual hard disk**, specify the location of the Windows fixed VHD that we created earlier, and then select **Next**.
+
+ ![Connect virtual hard disk page](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/create-virtual-machine-4.png)
+
+1. Review the **Summary** and then select **Finish** to create the virtual machine.
+
+The virtual machine takes several minutes to create.
+
+
+## Connect to the Hyper-V VM
+
+The VM appears in the list of virtual machines on your client system.
+1. Select the VM, then right-click it and select **Start**.
+
+ ![Select VM and start it](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/connect-virtual-machine-2.png)
+
+2. The VM shows as **Running**. Select the VM, then right-click it and select **Connect**.
+
+ ![Connect to VM](./media/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image/connect-virtual-machine-4.png)
+
+After you're connected to the VM, complete the Machine setup wizard and then sign in to the VM.
+## Generalize the VHD
+
+Use the *sysprep* utility to generalize the VHD.
+
+1. Inside the VM, open a command prompt.
+1. Run the following command to generalize the VHD.
+
+ ```cmd
+ c:\windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown /mode:vm
+ ```
+ For details, see [Sysprep (system preparation) overview](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview).
+1. After the command is complete, the VM will shut down. **Do not restart the VM**.
+
+## Upload the VHD to Azure Blob storage
+
+Your VHD can now be used to create a generalized image on Azure Stack Edge.
+
+1. Upload the VHD to Azure Blob storage. See the detailed instructions in [Upload a VHD using Azure Storage Explorer](../devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md).
+1. After the upload is complete, you can use the uploaded image to create VM images and VMs.
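As an alternative to Storage Explorer, the upload in step 1 can be scripted with the Azure CLI. This is a sketch with placeholder names (not from the article above); VHDs are conventionally stored as page blobs, hence `--type page`:

```azurecli-interactive
az storage blob upload --account-name <storage-account> \
    --container-name <container> \
    --name <destination-name>.vhd \
    --file <path-to-local-fixed>.vhd \
    --type page
```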
+
+<!-- this should be added to deploy VM articles - If you experience any issues creating VMs from your new image, you can use VM console access to help troubleshoot. For information on console access, see [link].-->
+## Next steps
+
+Depending on the nature of your deployment, choose one of the following procedures.
+
+- [Deploy a VM from a generalized image via Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
+- [Deploy a VM from a generalized image via Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
digital-twins How To Ingest Iot Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-iot-hub-data.md
This how-to outlines how to send messages from IoT Hub to Azure Digital Twins, u
Whenever a temperature telemetry event is sent by the thermostat device, a function processes the telemetry and updates the *temperature* property of the digital twin. This scenario is outlined in the diagram below:

## Add a model and twin
In this section, you'll set up a [digital twin](concepts-twins-graph.md) in Azur
To create a thermostat-type twin, you'll first need to upload the thermostat [model](concepts-models.md) to your instance, which describes the properties of a thermostat and will be used later to create the twin.
-The model looks like this:
-
-To **upload this model to your twins instance**, run the following Azure CLI command, which uploads the above model as inline JSON. You can run the command in [Azure Cloud Shell](/cloud-shell/overview.md) in your browser, or on your machine if you have the CLI [installed locally](/cli/azure/install-azure-cli.md).
-
-```azurecli-interactive
-az dt model create --models '{ "@id": "dtmi:contosocom:DigitalTwins:Thermostat;1", "@type": "Interface", "@context": "dtmi:dtdl:context;2", "contents": [ { "@type": "Property", "name": "Temperature", "schema": "double" } ]}' -n {digital_twins_instance_name}
-```
You'll then need to **create one twin using this model**. Use the following command to create a thermostat twin named **thermostat67**, and set 0.0 as an initial temperature value.
You'll then need to **create one twin using this model**. Use the following comm
az dt twin create --dtmi "dtmi:contosocom:DigitalTwins:Thermostat;1" --twin-id thermostat67 --properties '{"Temperature": 0.0}' --dt-name {digital_twins_instance_name}
```
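If twin creation fails with a parsing error, you can sanity-check the `--properties` payload locally before passing it to the CLI. A minimal sketch (assumes `python3` is available; note the JSON must not have a trailing comma):

```shell
# Validate the inline properties JSON before running `az dt twin create`.
PROPS='{"Temperature": 0.0}'
echo "$PROPS" | python3 -m json.tool > /dev/null && echo "valid JSON"
```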
->[!NOTE]
-> If you are using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters on the inline JSON fields for their values to be parsed correctly. Here are the commands to upload the model and create the twin with this modification:
->
-> Upload model:
-> ```azurecli-interactive
-> az dt model create --models '{ \"@id\": \"dtmi:contosocom:DigitalTwins:Thermostat;1\", \"@type\": \"Interface\", \"@context\": \"dtmi:dtdl:context;2\", \"contents\": [ { \"@type\": \"Property\", \"name\": \"Temperature\", \"schema\": \"double\" } ]}' -n {digital_twins_instance_name}
-> ```
+> [!NOTE]
+> If you are using Cloud Shell in the PowerShell environment, you may need to escape the quotation mark characters on the inline JSON fields for their values to be parsed correctly. Here is the command to create the twin with this modification:
>
> Create twin:
> ```azurecli-interactive
Save your function code.
#### Step 3: Publish the function app to Azure
-Publish the project to a function app in Azure.
+Publish the project with the *IoTHubtoTwins.cs* function to a function app in Azure.
For instructions on how to do this, see the section [**Publish the function app to Azure**](how-to-create-azure-function.md#publish-the-function-app-to-azure) of the *How-to: Set up a function for processing data* article.
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
Things you **can** do:
* Read properties * Read outgoing relationships * Add and delete incoming relationships (as in, other twins can still form relationships *to* this twin)
- - The `target` in the relationship definition can still reflect the DTMI of the deleted model. A relationship with no defined target can also work here.
+ - The `target` in the relationship definition can still reflect the DTMI of the deleted model. A relationship with no defined target can also work here.
* Delete relationships * Delete the twin
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
The result of calling `object result = await client.GetDigitalTwinAsync("my-moon
The defined properties of the digital twin are returned as top-level properties on the digital twin. Metadata or system information that is not part of the DTDL definition is returned with a `$` prefix. Metadata properties include the following values: * `$dtId`: The ID of the digital twin in this Azure Digital Twins instance
-* `$etag`: A standard HTTP field assigned by the web server. This is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. It can also be used in HTTP headers in these ways:
- - with read operations to avoid fetching content that hasn't changed
- - with write operations to support optimistic concurrency
+* `$etag`: A standard HTTP field assigned by the web server. This is updated to a new value every time the twin is updated, which can be useful to determine whether the twin's data has been updated on the server since a previous check. You can use `If-Match` to perform updates and deletes that only complete if the entity's etag matches the etag provided. For more information on these operations, see the documentation for [DigitalTwins Update](/rest/api/digital-twins/dataplane/twins/digitaltwins_update) and [DigitalTwins Delete](/rest/api/digital-twins/dataplane/twins/digitaltwins_delete).
* `$metadata`: A set of other properties, including:
  - The DTMI of the model of the digital twin.
  - Synchronization status for each writeable property. This is most useful for devices, where it's possible that the service and the device have diverging statuses (for example, when a device is offline). Currently, this property only applies to physical devices connected to IoT Hub.

With the data in the metadata section, it is possible to understand the full status of a property, as well as the last modified timestamps. For more information about sync status, see [this IoT Hub tutorial](../iot-hub/tutorial-device-twins.md) on synchronizing device state.
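The `$etag`/`If-Match` interaction described above can be sketched as a raw REST call against the data plane API. Host name, twin ID, and etag value are placeholders; `2020-10-31` is the GA data plane API version:

```shell
# Delete a twin only if it hasn't changed since you last read it.
# If the etag on the server no longer matches, the request fails with
# 412 Precondition Failed instead of deleting a newer version.
curl -X DELETE \
  "https://<your-instance-hostname>/digitaltwins/<twin-id>?api-version=2020-10-31" \
  -H "Authorization: Bearer $TOKEN" \
  -H 'If-Match: W/"<etag-from-last-read>"'
```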
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
# Mandatory fields. Title: Auto-manage devices using DPS
+ Title: Auto-manage devices using Device Provisioning Service
description: See how to set up an automated process to provision and retire IoT devices in Azure Digital Twins using Device Provisioning Service (DPS). Previously updated : 9/1/2020 Last updated : 3/21/2021
For more information about the _provision_ and _retire_ stages, and to better un
## Prerequisites
-Before you can set up the provisioning, you need to have an **Azure Digital Twins instance** that contains models and twins. This instance should also be set up with the ability to update digital twin information based on data.
-
-If you do not have this set up already, you can create it by following the Azure Digital Twins [*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md). The tutorial will walk you through setting up an Azure Digital Twins instance with models and twins, a connected Azure [IoT Hub](../iot-hub/about-iot-hub.md), and several [Azure functions](../azure-functions/functions-overview.md) to propagate data flow.
-
-You will need the following values later in this article from when you set up your instance. If you need to gather these values again, use the links below for instructions.
-* Azure Digital Twins instance **_host name_** ([find in portal](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values))
-* Azure Event Hubs connection string **_connection string_** ([find in portal](../event-hubs/event-hubs-get-connection-string.md#get-connection-string-from-the-portal))
+Before you can set up the provisioning, you'll need to set up the following:
+* an **Azure Digital Twins instance**. Follow the instructions in [*How-to: Set up an instance and authentication*](how-to-set-up-instance-portal.md) to create the instance. Gather the instance's **_host name_** in the Azure portal ([instructions](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)).
+* an **IoT hub**. For instructions, see the *Create an IoT Hub* section of this [IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md).
+* an [**Azure function**](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [*How to: Ingest IoT hub data*](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the *Download ZIP* button underneath the title. Unzip the downloaded folder.
-The device simulator is based on **Node.js**, version 10.0.x or later. [*Prepare your development environment*](https://github.com/Azure/azure-iot-sdk-node/blob/master/doc/node-devbox-setup.md) describes how to install Node.js for this tutorial on either Windows or Linux.
+You'll need [**Node.js**](https://nodejs.org/download) installed on your machine. The device simulator is based on **Node.js**, version 10.0.x or later.
## Solution architecture

The image below illustrates the architecture of this solution using Azure Digital Twins with Device Provisioning Service. It shows both the device provision and retire flow.

This article is divided into two sections:
* [*Auto-provision device using Device Provisioning Service*](#auto-provision-device-using-device-provisioning-service)
For deeper explanations of each step in the architecture, see their individual s
In this section, you'll be attaching Device Provisioning Service to Azure Digital Twins to auto-provision devices through the path below. This is an excerpt from the full architecture shown [earlier](#solution-architecture).

Here is a description of the process flow:
1. Device contacts the DPS endpoint, passing identifying information to prove its identity.
2. DPS validates device identity by validating the registration ID and key against the enrollment list, and calls an [Azure function](../azure-functions/functions-overview.md) to do the allocation.
-3. The Azure function creates a new [twin](concepts-twins-graph.md) in Azure Digital Twins for the device.
+3. The Azure function creates a new [twin](concepts-twins-graph.md) in Azure Digital Twins for the device. The twin will have the same name as the device's **registration ID**.
4. DPS registers the device with an IoT hub, and populates the device's desired twin state. 5. The IoT hub returns device ID information and the IoT hub connection information to the device. The device can now connect to the IoT hub.
The following sections walk through the steps to set up this auto-provision devi
### Create a Device Provisioning Service
-When a new device is provisioned using Device Provisioning Service, a new twin for that device can be created in Azure Digital Twins.
+When a new device is provisioned using Device Provisioning Service, a new twin for that device can be created in Azure Digital Twins with the same name as the registration ID.
Create a Device Provisioning Service instance, which will be used to provision IoT devices. You can either use the Azure CLI instructions below, or use the Azure portal: [*Quickstart: Set up the IoT Hub Device Provisioning Service with the Azure portal*](../iot-dps/quick-setup-auto-provision.md).
-The following Azure CLI command will create a Device Provisioning Service. You will need to specify a name, resource group, and region. The command can be run in [Cloud Shell](https://shell.azure.com), or locally if you have the Azure CLI [installed on your machine](/cli/azure/install-azure-cli).
+The following Azure CLI command will create a Device Provisioning Service. You'll need to specify a Device Provisioning Service name, resource group, and region. To see what regions support Device Provisioning Service, visit [*Azure products available by region*](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
+The command can be run in [Cloud Shell](https://shell.azure.com), or locally if you have the Azure CLI [installed on your machine](/cli/azure/install-azure-cli).
```azurecli-interactive
-az iot dps create --name <Device Provisioning Service name> --resource-group <resource group name> --location <region; for example, eastus>
+az iot dps create --name <Device Provisioning Service name> --resource-group <resource group name> --location <region>
```
-### Create an Azure function
+### Add a function to use with Device Provisioning Service
-Next, you'll create an HTTP request-triggered function inside a function app. You can use the function app created in the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)), or your own.
+Inside your function app project that you created in the [prerequisites](#prerequisites) section, you'll create a new function to use with the Device Provisioning Service. This function will be used by the Device Provisioning Service in a [Custom Allocation Policy](../iot-dps/how-to-use-custom-allocation-policies.md) to provision a new device.
-This function will be used by the Device Provisioning Service in a [Custom Allocation Policy](../iot-dps/how-to-use-custom-allocation-policies.md) to provision a new device. For more information about using HTTP requests with Azure functions, see [*Azure Http request trigger for Azure Functions*](../azure-functions/functions-bindings-http-webhook-trigger.md).
+Start by opening the function app project in Visual Studio on your machine and follow the steps below.
-Inside your function app project, add a new function. Also, add a new NuGet package to the project: `Microsoft.Azure.Devices.Provisioning.Service`.
+#### Step 1: Add a new function
-In the newly created function code file, paste in the following code.
+Add a new function of type *HTTP-trigger* to the function app project in Visual Studio.
-Save the file and then re-publish your function app. For instructions on publishing the function app, see the [*Publish the app*](tutorial-end-to-end.md#publish-the-app) section of the end-to-end tutorial.
+#### Step 2: Fill in function code
-### Configure your function
+Add a new NuGet package to the project: [Microsoft.Azure.Devices.Provisioning.Service](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/). You might need to add more packages to your project as well, if the packages used in the code aren't part of the project already.
-Next, you'll need to set environment variables in your function app from earlier, containing the reference to the Azure Digital Twins instance you've created. If you used the the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)), the setting will already be configured.
+In the newly created function code file, paste in the following code, rename the code file to *DpsAdtAllocationFunc.cs*, and save the file.
-Add the setting with this Azure CLI command:
-```azurecli-interactive
-az functionapp config appsettings set --settings "ADT_SERVICE_URL=https://<Azure Digital Twins instance _host name_>" -g <resource group> -n <your App Service (function app) name>
-```
+#### Step 3: Publish the function app to Azure
+
+Publish the project with the *DpsAdtAllocationFunc.cs* function to the function app in Azure.
-Ensure that the permissions and Managed Identity role assignment are configured correctly for the function app, as described in the section [*Assign permissions to the function app*](tutorial-end-to-end.md#configure-permissions-for-the-function-app) in the end-to-end tutorial.
### Create Device Provisioning enrollment
-Next, you'll need to create an enrollment in Device Provisioning Service using a **custom allocation function**. Follow the instructions to do this in the [*Create the enrollment*](../iot-dps/how-to-use-custom-allocation-policies.md#create-the-enrollment) and [*Derive unique device keys*](../iot-dps/how-to-use-custom-allocation-policies.md#derive-unique-device-keys) sections of the Device Provisioning Services article about custom allocation policies.
+Next, you'll need to create an enrollment in Device Provisioning Service using a **custom allocation function**. Follow the instructions to do this in the [*Create the enrollment*](../iot-dps/how-to-use-custom-allocation-policies.md#create-the-enrollment) section of the custom allocation policies article in the Device Provisioning Service documentation.
+
+While going through that flow, make sure you select the following options to link the enrollment to the function you just created.
+
+* **Select how you want to assign devices to hubs**: Custom (Use Azure Function).
+* **Select the IoT hubs this group can be assigned to:** Choose your IoT hub name or select the *Link a new IoT hub* button, and choose your IoT hub from the dropdown.
+
+Next, choose the *Select a new function* button to link your function app to the enrollment group. Then, fill in the following values:
+
+* **Subscription**: Your Azure subscription is auto-populated. Make sure it is the right subscription.
+* **Function App**: Choose your function app name.
+* **Function**: Choose DpsAdtAllocationFunc.
+
+Save your details.
+
-While going through that flow, you will link the enrollment to the function you just created by selecting your function during the step to **Select how you want to assign devices to hubs**. After creating the enrollment, the enrollment name and primary or secondary SAS key will be used later to configure the device simulator for this article.
+After creating the enrollment, the **Primary Key** for the enrollment will be used later to configure the device simulator for this article.
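This article's simulator uses the enrollment primary key directly. If you instead use a *group* enrollment with per-device keys (see the Device Provisioning Service documentation on deriving unique device keys), each device key is an HMAC-SHA256 of the registration ID, keyed with the base64-decoded group key. A minimal sketch; the key and registration ID below are placeholders:

```shell
# Derive a per-device symmetric key from an enrollment group primary key.
GROUP_KEY="$(printf 'placeholder-group-key-material' | base64)"  # replace with your enrollment primary key
REG_ID="thermostat67"                                            # your device registration ID
derive_device_key() {
  # HMAC-SHA256(registration ID) keyed with the decoded group key, base64-encoded
  printf '%s' "$2" | openssl dgst -sha256 -binary \
    -mac HMAC -macopt hexkey:"$(printf '%s' "$1" | base64 -d | xxd -p -c 256)" | base64
}
derive_device_key "$GROUP_KEY" "$REG_ID"
```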
### Set up the device simulator

This sample uses a device simulator that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). If you haven't already downloaded the sample, get it now by navigating to the sample link and selecting the *Download ZIP* button underneath the title. Unzip the downloaded folder.
-Open a command window and navigate into the downloaded folder, and then into the *device-simulator* directory. Install the dependencies for the project using the following command:
+#### Upload the model
+
+The device simulator is a thermostat-type device that uses the model with this ID: `dtmi:contosocom:DigitalTwins:Thermostat;1`. You'll need to upload this model to Azure Digital Twins before you can create a twin of this type for the device.
+For more information about models, refer to [*How-to: Manage models*](how-to-manage-model.md#upload-models).
+
+#### Configure and run the simulator
+
+In your command window, navigate to the downloaded sample *Azure Digital Twins and IoT Hub Integration* that you unzipped earlier, and then into the *device-simulator* directory. Next, install the dependencies for the project using the following command:
```cmd
npm install
```
-Next, copy the *.env.template* file to a new file called *.env*, and fill in these settings:
+Next, in your device simulator directory, copy the *.env.template* file to a new file called *.env*, and gather the following values to fill in the settings:
+
+* PROVISIONING_IDSCOPE: To get this value, navigate to your device provisioning service in the [Azure portal](https://portal.azure.com/), then select *Overview* in the menu options and look for the field *ID Scope*.
+
+ :::image type="content" source="media/how-to-provision-using-dps/id-scope.png" alt-text="Screenshot of the Azure portal view of the device provisioning overview page to copy the ID Scope value." lightbox="media/how-to-provision-using-dps/id-scope.png":::
+
+* PROVISIONING_REGISTRATION_ID: You can choose a registration ID for your device.
+* ADT_MODEL_ID: `dtmi:contosocom:DigitalTwins:Thermostat;1`
+* PROVISIONING_SYMMETRIC_KEY: This is the primary key for the enrollment you set up earlier. To get this value again, navigate to your device provisioning service in the Azure portal, select *Manage enrollments*, then select the enrollment group that you created earlier and copy the *Primary Key*.
+
+ :::image type="content" source="media/how-to-provision-using-dps/sas-primary-key.png" alt-text="Screenshot of the Azure portal view of the device provisioning service manage enrollments page to copy the SAS primary key value." lightbox="media/how-to-provision-using-dps/sas-primary-key.png":::
+
+Now, use the values above to update the *.env* file settings.
```cmd
PROVISIONING_HOST = "global.azure-devices-provisioning.net"
PROVISIONING_IDSCOPE = "<Device Provisioning Service Scope ID>"
PROVISIONING_REGISTRATION_ID = "<Device Registration ID>"
ADT_MODEL_ID = "dtmi:contosocom:DigitalTwins:Thermostat;1"
-PROVISIONING_SYMMETRIC_KEY = "<Device Provisioning Service enrollment primary or secondary SAS key>"
+PROVISIONING_SYMMETRIC_KEY = "<Device Provisioning Service enrollment primary SAS key>"
```

Save and close the file.
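Before running the simulator, you can check that no setting was missed. This is a small sketch (a shell helper, not part of the sample) that verifies every setting the simulator expects is present in an env file:

```shell
# Verify that an env file defines all the settings the device simulator reads.
check_simulator_env() {
  for k in PROVISIONING_HOST PROVISIONING_IDSCOPE PROVISIONING_REGISTRATION_ID \
           ADT_MODEL_ID PROVISIONING_SYMMETRIC_KEY; do
    grep -q "^$k" "$1" || { echo "$k missing"; return 1; }
  done
  echo "all settings present"
}
# Usage: check_simulator_env .env
```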
node .\adt_custom_register.js
```

You should see the device being registered and connected to IoT Hub, and then starting to send messages.

### Validate
-As a result of the flow you've set up in this article, the device will be automatically registered in Azure Digital Twins. Using the following [Azure Digital Twins CLI](how-to-use-cli.md) command to find the twin of the device in the Azure Digital Twins instance you created.
+As a result of the flow you've set up in this article, the device will be automatically registered in Azure Digital Twins. Use the following [Azure Digital Twins CLI](how-to-use-cli.md) command to find the twin of the device in the Azure Digital Twins instance you created.
```azurecli-interactive
-az dt twin show -n <Digital Twins instance name> --twin-id <Device Registration ID>"
+az dt twin show -n <Digital Twins instance name> --twin-id "<Device Registration ID>"
```

You should see the twin of the device being found in the Azure Digital Twins instance.

## Auto-retire device using IoT Hub lifecycle events

In this section, you will be attaching IoT Hub lifecycle events to Azure Digital Twins to auto-retire devices through the path below. This is an excerpt from the full architecture shown [earlier](#solution-architecture).

Here is a description of the process flow:
1. An external or manual process triggers the deletion of a device in IoT Hub.
The following sections walk through the steps to set up this auto-retire device
### Create an event hub
-You now need to create an Azure [event hub](../event-hubs/event-hubs-about.md), which will be used to receive the IoT Hub lifecycle events.
+Next, you'll create an Azure [event hub](../event-hubs/event-hubs-about.md) to receive IoT Hub lifecycle events.
-Go through the steps described in the [*Create an event hub*](../event-hubs/event-hubs-create.md) quickstart, using the following information:
-* If you're using the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)), you can reuse the resource group you created for the end-to-end tutorial.
-* Name your event hub *lifecycleevents*, or something else of your choice, and remember the namespace you created. You will use these when you set up the lifecycle function and IoT Hub route in the next sections.
+Follow the steps described in the [*Create an event hub*](../event-hubs/event-hubs-create.md) quickstart. Name your event hub *lifecycleevents*. You'll use this event hub name when you set up IoT Hub route and an Azure function in the next sections.
-### Create an Azure function
+The screenshot below illustrates the creation of the event hub.
-Next, you'll create an Event Hubs-triggered function inside a function app. You can use the function app created in the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](tutorial-end-to-end.md)), or your own.
+#### Create SAS policy for your event hub
-Name your event hub trigger *lifecycleevents*, and connect the event hub trigger to the event hub you created in the previous step. If you used a different event hub name, change it to match in the trigger name below.
+Next, you'll need to create a [shared access signature (SAS) policy](../event-hubs/authorize-access-shared-access-signature.md) to configure the event hub with your function app.
+To do this:
+1. Navigate to the event hub you just created in the Azure portal and select **Shared access policies** in the menu options on the left.
+2. Select **Add**. In the *Add SAS Policy* window that opens, enter a policy name of your choice and select the *Listen* checkbox.
+3. Select **Create**.
+
-This function will use the IoT Hub device lifecycle event to retire an existing device. For more about lifecycle events, see [*IoT Hub Non-telemetry events*](../iot-hub/iot-hub-devguide-messages-d2c.md#non-telemetry-events). For more information about using Event Hubs with Azure functions, see [*Azure Event Hubs trigger for Azure Functions*](../azure-functions/functions-bindings-event-hubs-trigger.md).
+#### Configure event hub with function app
-Inside your published function app, add a new function class of type *Event Hub Trigger*, and paste in the code below.
+Next, configure the Azure function app that you set up in the [prerequisites](#prerequisites) section to work with your new event hub. You'll do this by setting an environment variable inside the function app with the event hub's connection string.
+1. Open the policy that you just created and copy the **Connection string-primary key** value.
-Save the project, then publish the function app again. For instructions on publishing the function app, see the [*Publish the app*](tutorial-end-to-end.md#publish-the-app) section of the end-to-end tutorial.
+ :::image type="content" source="media/how-to-provision-using-dps/event-hub-sas-policy-connection-string.png" alt-text="Screenshot of the Azure portal to copy the connection string-primary key." lightbox="media/how-to-provision-using-dps/event-hub-sas-policy-connection-string.png":::
-### Configure your function
+2. Add the connection string as a variable in the function app settings with the following Azure CLI command. The command can be run in [Cloud Shell](https://shell.azure.com), or locally if you have the Azure CLI [installed on your machine](/cli/azure/install-azure-cli).
-Next, you'll need to set environment variables in your function app from earlier, containing the reference to the Azure Digital Twins instance you've created and the event hub. If you used the the end-to-end tutorial ([*Tutorial: Connect an end-to-end solution*](./tutorial-end-to-end.md)), the first setting will already be configured.
+ ```azurecli-interactive
+ az functionapp config appsettings set --settings "EVENTHUB_CONNECTIONSTRING=<Event Hubs SAS connection string Listen>" -g <resource group> -n <your App Service (function app) name>
+ ```
-Add the setting with this Azure CLI command. The command can be run in [Cloud Shell](https://shell.azure.com), or locally if you have the Azure CLI [installed on your machine](/cli/azure/install-azure-cli).
+### Add a function to retire with IoT Hub lifecycle events
-```azurecli-interactive
-az functionapp config appsettings set --settings "ADT_SERVICE_URL=https://<Azure Digital Twins instance _host name_>" -g <resource group> -n <your App Service (function app) name>
-```
+Inside your function app project that you created in the [prerequisites](#prerequisites) section, you'll create a new function to retire an existing device using IoT Hub lifecycle events.
-Next, you will need to configure the function environment variable for connecting to the newly created event hub.
+For more about lifecycle events, see [*IoT Hub Non-telemetry events*](../iot-hub/iot-hub-devguide-messages-d2c.md#non-telemetry-events). For more information about using Event Hubs with Azure functions, see [*Azure Event Hubs trigger for Azure Functions*](../azure-functions/functions-bindings-event-hubs-trigger.md).
-```azurecli-interactive
-az functionapp config appsettings set --settings "EVENTHUB_CONNECTIONSTRING=<Event Hubs SAS connection string Listen>" -g <resource group> -n <your App Service (function app) name>
-```
+Start by opening the function app project in Visual Studio on your machine and follow the steps below.
+
+#### Step 1: Add a new function
+
+Add a new function of type *Event Hub Trigger* to the function app project in Visual Studio.
-Ensure that the permissions and Managed Identity role assignment are configured correctly for the function app, as described in the section [*Assign permissions to the function app*](tutorial-end-to-end.md#configure-permissions-for-the-function-app) in the end-to-end tutorial.
+
+#### Step 2: Fill in function code
+
+In the newly created function code file, paste in the following code, rename the code file to `DeleteDeviceInTwinFunc.cs`, and save the file.
+#### Step 3: Publish the function app to Azure
+
+Publish the project with the *DeleteDeviceInTwinFunc.cs* function to the function app in Azure.
+ ### Create an IoT Hub route for lifecycle events
-Now you need to set up an IoT Hub route, to route device lifecycle events. In this case, you will specifically listen to device delete events, identified by `if (opType == "deleteDeviceIdentity")`. This will trigger the delete of the digital twin item, finalizing the retirement of a device and its digital twin.
+Now you'll set up an IoT Hub route to route device lifecycle events. In this case, you'll specifically listen for device delete events, identified by `if (opType == "deleteDeviceIdentity")`. This triggers the deletion of the digital twin, finalizing the retirement of the device and its digital twin.
+
+First, you'll need to create an event hub endpoint in your IoT hub. Then, you'll add a route in IoT hub to send lifecycle events to this event hub endpoint.
+Follow these steps to create an event hub endpoint:
+
+1. In the [Azure portal](https://portal.azure.com/), navigate to the IoT hub you created in the [prerequisites](#prerequisites) section and select **Message routing** in the menu options on the left.
+2. Select the **Custom endpoints** tab.
+3. Select **+ Add** and choose **Event hubs** to add an event hubs type endpoint.
-Instructions for creating an IoT Hub route are described in this article: [*Use IoT Hub message routing to send device-to-cloud messages to different endpoints*](../iot-hub/iot-hub-devguide-messages-d2c.md). The section *Non-telemetry events* explains that you can use **device lifecycle events** as the data source for the route.
+ :::image type="content" source="media/how-to-provision-using-dps/event-hub-custom-endpoint.png" alt-text="Screenshot of the Azure portal window to add an event hub custom endpoint." lightbox="media/how-to-provision-using-dps/event-hub-custom-endpoint.png":::
-The steps you need to go through for this setup are:
-1. Create a custom IoT Hub event hub endpoint. This endpoint should target the event hub you created in the [*Create an event hub*](#create-an-event-hub) section.
-2. Add a *Device Lifecycle Events* route. Use the endpoint created in the previous step. You can limit the device lifecycle events to only send the delete events by adding the routing query `opType='deleteDeviceIdentity'`.
- :::image type="content" source="media/how-to-provision-using-dps/lifecycle-route.png" alt-text="Add a route":::
+4. In the window *Add an event hub endpoint* that opens, choose the following values:
+ * **Endpoint name**: Choose an endpoint name.
+ * **Event hub namespace**: Select your event hub namespace from the dropdown list.
+ * **Event hub instance**: Choose the event hub that you created in the [*Create an event hub*](#create-an-event-hub) section.
+5. Select **Create**. Keep this window open to add a route in the next step.
+
+ :::image type="content" source="media/how-to-provision-using-dps/add-event-hub-endpoint.png" alt-text="Screenshot of the Azure portal window to add an event hub endpoint." lightbox="media/how-to-provision-using-dps/add-event-hub-endpoint.png":::
+
+Next, you'll add a route that connects to the endpoint you created in the above step, with a routing query that sends the delete events. Follow these steps to create a route:
+
+1. Navigate to the *Routes* tab and select **Add** to add a route.
+
+ :::image type="content" source="media/how-to-provision-using-dps/add-message-route.png" alt-text="Screenshot of the Azure portal window to add a route to send events." lightbox="media/how-to-provision-using-dps/add-message-route.png":::
+
+2. In the *Add a route* page that opens, choose the following values:
+
+ * **Name**: Choose a name for your route.
+ * **Endpoint**: Choose the event hubs endpoint you created earlier from the dropdown.
+ * **Data source**: Choose *Device Lifecycle Events*.
+ * **Routing query**: Enter `opType='deleteDeviceIdentity'`. This query limits the device lifecycle events to only send the delete events.
+
+3. Select **Save**.
+
+ :::image type="content" source="media/how-to-provision-using-dps/lifecycle-route.png" alt-text="Screenshot of the Azure portal window to add a route to send lifecycle events." lightbox="media/how-to-provision-using-dps/lifecycle-route.png":::
Once you have gone through this flow, everything is set to retire devices end-to-end.
To trigger the process of retirement, you need to manually delete the device from IoT Hub.
-In the [first half of this article](#auto-provision-device-using-device-provisioning-service), you created a device in IoT Hub and a corresponding digital twin.
+You can do this with an [Azure CLI command](/cli/azure/ext/azure-iot/iot/hub/device-identity#ext_azure_iot_az_iot_hub_device_identity_delete) or in the Azure portal.
+Follow the steps below to delete the device in the Azure portal:
+
+1. Navigate to your IoT hub, and choose **IoT devices** in the menu options on the left.
+2. You'll see a device with the device registration ID you chose in the [first half of this article](#auto-provision-device-using-device-provisioning-service). Alternatively, you can choose any other device to delete, as long as it has a twin in Azure Digital Twins so you can verify that the twin is automatically deleted after the device is deleted.
+3. Select the device and choose **Delete**.
-Now, go to the IoT Hub and delete that device (you can do this with an [Azure CLI command](/cli/azure/ext/azure-iot/iot/hub/module-identity#ext_azure_iot_az_iot_hub_module_identity_delete) or in the [Azure portal](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Devices%2FIotHubs)).
-The device will be automatically removed from Azure Digital Twins.
+It might take a few minutes to see the changes reflected in Azure Digital Twins.
Use the following [Azure Digital Twins CLI](how-to-use-cli.md) command to verify the twin of the device in the Azure Digital Twins instance was deleted.

```azurecli-interactive
-az dt twin show -n <Digital Twins instance name> --twin-id <Device Registration ID>"
+az dt twin show -n <Digital Twins instance name> --twin-id "<Device Registration ID>"
```

You should see that the twin of the device cannot be found in the Azure Digital Twins instance anymore.

## Clean up resources
The digital twins created for the devices are stored as a flat hierarchy in Azur
* [*Concepts: Digital twins and the twin graph*](concepts-twins-graph.md)
+For more information about using HTTP requests with Azure functions, see:
+
+* [*Azure Http request trigger for Azure Functions*](../azure-functions/functions-bindings-http-webhook-trigger.md)
+
You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following:

* [*How-to: Manage a digital twin*](how-to-manage-twin.md)
event-grid Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/sdk-overview.md
The management SDKs enable you to create, update, and delete event grid topics a
The data plane SDKs enable you to post events to topics by taking care of authenticating, forming the event, and asynchronously posting to the specified endpoint. They also enable you to consume first party events. Currently, the following SDKs are available: | Programming language | SDK |
-| -- | - | - |
+| -- | - |
| .NET | Stable SDK: [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid)<p>Preview SDK: [Azure.Messaging.EventGrid](https://www.nuget.org/packages/Azure.Messaging.EventGrid/) | | Java | Stable SDK: [azure-eventgrid](https://mvnrepository.com/artifact/com.microsoft.azure/azure-eventgrid)<p>Preview SDK: [azure-messaging-eventgrid](https://search.maven.org/artifact/com.azure/azure-messaging-eventgrid/)</p> | | Python | [azure-eventgrid](https://pypi.org/project/azure-eventgrid/#history) (see the latest stable and pre-release versions on the **Release history** page) | | JavaScript | [@azure/eventgrid](https://www.npmjs.com/package/@azure/eventgrid/) (switch to the **Versions** tab to see the latest stable and beta version packages). |
-| Go | [Azure SDK for Go](https://github.com/Azure/azure-sdk-for-go) | |
-| Ruby | [azure_event_grid](https://rubygems.org/gems/azure_event_grid) | |
+| Go | [Azure SDK for Go](https://github.com/Azure/azure-sdk-for-go) |
+| Ruby | [azure_event_grid](https://rubygems.org/gems/azure_event_grid) |
## Next steps
event-hubs Add Custom Data Event https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/add-custom-data-event.md
+
+ Title: Add custom data to events in Azure Event Hubs
+description: This article shows you how to add custom data to events in Azure Event Hubs.
+ Last updated : 03/19/2021
+# Add custom data to events in Azure Event Hubs
+Because an event consists mainly of an opaque set of bytes, it may be difficult for consumers of those events to make informed decisions about how to process them. To allow event publishers to offer better context for consumers, events may also contain custom metadata, in the form of a set of key-value pairs. One common scenario for the inclusion of metadata is to provide a hint about the type of data contained by an event, so that consumers understand its format and can deserialize it appropriately.
+
+> [!NOTE]
+> This metadata is not used by, or in any way meaningful to, the Event Hubs service; it exists only for coordination between event publishers and consumers.
+
+The following sections show you how to add custom data to events in different programming languages.
+
+## .NET
+
+```csharp
+var eventBody = new BinaryData("Hello, Event Hubs!");
+var eventData = new EventData(eventBody);
+eventData.Properties.Add("EventType", "com.microsoft.samples.hello-event");
+eventData.Properties.Add("priority", 1);
+eventData.Properties.Add("score", 9.0);
+```
+
+For the full code sample, see [Publishing events with custom metadata](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/samples/Sample04_PublishingEvents.md#publishing-events-with-custom-metadata).
+
+## Java
+
+```java
+EventData firstEvent = new EventData("EventData Sample 1".getBytes(UTF_8));
+firstEvent.getProperties().put("EventType", "com.microsoft.samples.hello-event");
+firstEvent.getProperties().put("priority", 1);
+firstEvent.getProperties().put("score", 9.0);
+```
+
+For the full code sample, see [Publish events with custom metadata](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs/src/samples/java/com/azure/messaging/eventhubs/PublishEventsWithCustomMetadata.java).
+
+## Python
+
+```python
+event_data = EventData('Message with properties')
+event_data.properties = {'event-type': 'com.microsoft.samples.hello-event', 'priority': 1, "score": 9.0}
+```
+
+For the full code sample, see [Send Event Data batch with properties](https://github.com/Azure/azure-sdk-for-python/blob/azure-eventhub_5.3.1/sdk/eventhub/azure-eventhub/samples/async_samples/send_async.py).
+
+## JavaScript
+
+```javascript
+let eventData = { body: "First event", properties: { "event-type": "com.microsoft.samples.hello-event", "priority": 1, "score": 9.0 } };
+```
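On the consuming side, these properties let a handler choose how to interpret the opaque body before deserializing it. Below is a minimal Python sketch; the `event-type` value mirrors the samples above and is purely a convention between publisher and consumer, not something the Event Hubs service interprets:

```python
def decode_event(body: bytes, properties: dict) -> str:
    """Decode an event body based on the publisher-supplied type hint."""
    event_type = properties.get("event-type")
    if event_type == "com.microsoft.samples.hello-event":
        # The samples above publish plain UTF-8 text for this event type.
        return body.decode("utf-8")
    raise ValueError(f"No decoder registered for event type: {event_type!r}")

decoded = decode_event(
    b"Hello, Event Hubs!",
    {"event-type": "com.microsoft.samples.hello-event", "priority": 1, "score": 9.0},
)
```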
+
+## Next steps
+See the following quickstarts and samples.
+
+- Quickstarts: [.NET](event-hubs-dotnet-standard-getstarted-send.md), [Java](event-hubs-java-get-started-send.md), [Python](event-hubs-python-get-started-send.md), [JavaScript](event-hubs-node-get-started-send.md)
+- Samples on GitHub: [.NET](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples), [Java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/eventhubs/azure-messaging-eventhubs/src/samples), [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-eventhub_5.3.1/sdk/eventhub/azure-eventhub/samples), [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/eventhub/event-hubs/samples/javascript), [TypeScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/eventhub/event-hubs/samples/typescript)
event-hubs Event Hubs Availability And Consistency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-availability-and-consistency.md
We recommend sending events to an event hub without setting partition informatio
In this section, you learn how to send events to a specific partition using different programming languages. ### [.NET](#tab/dotnet)
-To send events to a specific partition, create the batch using the [EventHubProducerClient.CreateBatchAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.createbatchasync#Azure_Messaging_EventHubs_Producer_EventHubProducerClient_CreateBatchAsync_Azure_Messaging_EventHubs_Producer_CreateBatchOptions_System_Threading_CancellationToken_) method by specifying either the `PartitionId` or the `PartitionKey` in [CreateBatchOptions](/dotnet/api/azure.messaging.eventhubs.producer.createbatchoptions?view=azure-dotnet). The following code sends a batch of events to a specific partition by specifying a partition key. Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival.
+To send events to a specific partition, create the batch using the [EventHubProducerClient.CreateBatchAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.createbatchasync#Azure_Messaging_EventHubs_Producer_EventHubProducerClient_CreateBatchAsync_Azure_Messaging_EventHubs_Producer_CreateBatchOptions_System_Threading_CancellationToken_) method by specifying either the `PartitionId` or the `PartitionKey` in [CreateBatchOptions](/dotnet/api/azure.messaging.eventhubs.producer.createbatchoptions). The following code sends a batch of events to a specific partition by specifying a partition key. Event Hubs ensures that all events sharing a partition key value are stored together and delivered in order of arrival.
```csharp
var batchOptions = new CreateBatchOptions { PartitionKey = "cities" };
```
expressroute Expressroute Howto Add Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-ipv6-portal.md
While IPv6 support is available for connections to deployments in regions with A
* Global Reach connections between ExpressRoute circuits * Use of ExpressRoute with virtual WAN * FastPath with non-ExpressRoute Direct circuits
+* FastPath with circuits in the following peering locations: Dubai
* Coexistence with VPN Gateway ## Next steps
expressroute Expressroute Howto Add Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-ipv6-powershell.md
While IPv6 support is available for connections to deployments in regions with A
* Global Reach connections between ExpressRoute circuits * Use of ExpressRoute with virtual WAN * FastPath with non-ExpressRoute Direct circuits
+* FastPath with circuits in the following peering locations: Dubai
* Coexistence with VPN Gateway ## Next steps
frontdoor Front Door Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-faq.md
To lock down your application to accept traffic only from your specific Front Do
- Look for the `Front Door ID` value under the Overview section from Front Door portal page. You can then filter on the incoming header '**X-Azure-FDID**' sent by Front Door to your backend with that value to ensure only your own specific Front Door instance is allowed (because the IP ranges above are shared with other Front Door instances of other customers).

-- Apply rule filtering in your backend web server to restrict traffic based on the resulting 'X-Azure-FDID' header value. Note that some services like Azure App Service provide this [header based filtering](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance-preview) capability without needing to change your application or host.
+- Apply rule filtering in your backend web server to restrict traffic based on the resulting 'X-Azure-FDID' header value. Note that some services like Azure App Service provide this [header based filtering](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance) capability without needing to change your application or host.
Here's an example for [Microsoft Internet Information Services (IIS)](https://www.iis.net/):
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-overview.md
# What is Azure Front Door? > [!IMPORTANT]
-> This documentation is for Azure Front Door. Looking for information on Azure Front Door Standard/Premium (Preview)? View [here](/standard-premium/overview.md).
+> This documentation is for Azure Front Door. Looking for information on Azure Front Door Standard/Premium (Preview)? View [here](standard-premium/overview.md).
Azure Front Door is a global, scalable entry-point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. With Front Door, you can transform your global consumer and enterprise applications into robust, high-performing personalized modern applications with contents that reach a global audience through Azure.
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/policy-for-kubernetes.md
Title: Learn Azure Policy for Kubernetes description: Learn how Azure Policy uses Rego and Open Policy Agent to manage clusters running Kubernetes in Azure or on-premises. Previously updated : 12/01/2020 Last updated : 03/22/2021 # Understand Azure Policy for Kubernetes clusters
The following limitations apply only to the Azure Policy Add-on for AKS:
The following are general recommendations for using the Azure Policy Add-on:

-- The Azure Policy Add-on requires 3 Gatekeeper components to run: 1 audit pod and 2 webhook pod
+- The Azure Policy Add-on requires three Gatekeeper components to run: one audit pod and two webhook pod
replicas. These components consume more resources as the count of Kubernetes resources and policy
- assignments increases in the cluster which requires audit and enforcement operations.
+ assignments increases in the cluster, which requires audit and enforcement operations.
- - For less than 500 pods in a single cluster with a max of 20 constraints: 2 vCPUs and 350 MB
+ - For fewer than 500 pods in a single cluster with a max of 20 constraints: 2 vCPUs and 350 MB
memory per component. - For more than 500 pods in a single cluster with a max of 40 constraints: 3 vCPUs and 600 MB memory per component.
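The sizing guidance above can be expressed as a small lookup. This Python sketch simply restates those two bullets (the thresholds and figures are the ones listed; treat them as starting points, not an official formula):

```python
def gatekeeper_sizing(pod_count: int) -> dict:
    """Suggested per-component resources for the Azure Policy Add-on."""
    if pod_count < 500:
        # Fewer than 500 pods, with a max of 20 constraints.
        return {"vcpus": 2, "memory_mb": 350, "max_constraints": 20}
    # More than 500 pods, with a max of 40 constraints.
    return {"vcpus": 3, "memory_mb": 600, "max_constraints": 40}

small_cluster = gatekeeper_sizing(200)
large_cluster = gatekeeper_sizing(800)
```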
The following recommendation applies only to AKS and the Azure Policy Add-on:
## Install Azure Policy Add-on for AKS Before installing the Azure Policy Add-on or enabling any of the service features, your subscription
-must enable the **Microsoft.ContainerService** and **Microsoft.PolicyInsights** resource providers.
+must enable the **Microsoft.PolicyInsights** resource provider.
1. You need the Azure CLI version 2.12.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see
must enable the **Microsoft.ContainerService** and **Microsoft.PolicyInsights**
- Azure portal:
- Register the **Microsoft.ContainerService** and **Microsoft.PolicyInsights** resource
- providers. For steps, see
+ Register the **Microsoft.PolicyInsights** resource provider. For steps, see
[Resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). - Azure CLI:
must enable the **Microsoft.ContainerService** and **Microsoft.PolicyInsights**
```azurecli-interactive # Log in first with az login if you're not using Cloud Shell
- # Provider register: Register the Azure Kubernetes Service provider
- az provider register --namespace Microsoft.ContainerService
- # Provider register: Register the Azure Policy provider az provider register --namespace Microsoft.PolicyInsights ```
you want to manage.
> 2020. New v2 versions of the policy definitions must then be assigned. To upgrade now, follow > these steps: >
- > 1. Validate your AKS cluster has the v1 add-on installed by visiting the **Policies** page on your AKS cluster and has the "The current cluster uses Azure Policy add-on v1..." message.
+ > 1. Validate your AKS cluster has the v1 add-on installed by visiting the **Policies** page on
+ > your AKS cluster and has the "The current cluster uses Azure Policy add-on v1..." message.
> 1. [Remove the add-on](#remove-the-add-on-from-aks). > 1. Select the **Enable add-on** button to install the v2 version of the add-on. > 1. [Assign v2 versions of your v1 built-in policy definitions](#assign-a-built-in-policy-definition)
kubectl get pods -n gatekeeper-system
Lastly, verify that the latest add-on is installed by running this Azure CLI command, replacing `<rg>` with your resource group name and `<cluster-name>` with the name of your AKS cluster:
-`az aks show --query addonProfiles.azurepolicy -g <rg> -n <cluster-name>`. The result should look similar to the following output and
-**config.version** should be `v2`:
+`az aks show --query addonProfiles.azurepolicy -g <rg> -n <cluster-name>`. The result should look
+similar to the following output and **config.version** should be `v2`:
```output "addonProfiles": {
compliance reporting experience. For more information, see
## Assign a built-in policy definition
-To assign a policy definition to your Kubernetes cluster, you must be assigned the appropriate
-Azure role-based access control (Azure RBAC) policy assignment operations. The Azure built-in roles **Resource
-Policy Contributor** and **Owner** have these operations. To learn more, see
+To assign a policy definition to your Kubernetes cluster, you must be assigned the appropriate Azure
+role-based access control (Azure RBAC) policy assignment operations. The Azure built-in roles
+**Resource Policy Contributor** and **Owner** have these operations. To learn more, see
[Azure RBAC permissions in Azure Policy](../overview.md#azure-rbac-permissions-in-azure-policy). Find the built-in policy definitions for managing your cluster using the Azure portal with the
Some other considerations:
- If the cluster subscription is registered with Azure Security Center, then Azure Security Center Kubernetes policies are applied on the cluster automatically. -- When a deny policy is applied on cluster with existing Kubernetes resources, any preexisting
+- When a deny policy is applied on a cluster with existing Kubernetes resources, any pre-existing
resource that is not compliant with the new policy continues to run. When the non-compliant resource gets rescheduled on a different node the Gatekeeper blocks the resource creation.
Some other considerations:
## Logging
-As a Kubernetes controller/container, both the the _azure-policy_ and _gatekeeper_ pods keep logs in
-the Kubernetes cluster. The logs can be exposed in the **Insights** page of the Kubernetes cluster.
-For more information, see
+As a Kubernetes controller/container, both the _azure-policy_ and _gatekeeper_ pods keep logs in the
+Kubernetes cluster. The logs can be exposed in the **Insights** page of the Kubernetes cluster. For
+more information, see
[Monitor your Kubernetes cluster performance with Azure Monitor for containers](../../../azure-monitor/containers/container-insights-analyze.md). To view the add-on logs, use `kubectl`:
governance Guest Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create.md
Configuration AuditBitLocker
} # Compile the configuration to create the MOF files
-AuditBitLocker ./Config
+AuditBitLocker
```
-Save this file with name `config.ps1` in the project folder. Run it in PowerShell by executing
-`./config.ps1` in the terminal. A new mof file is created.
+Run this script in a PowerShell terminal or save it as `config.ps1` in the project folder.
+Run it in PowerShell by executing `./config.ps1` in the terminal. A new mof file is created.
The `Node AuditBitlocker` command isn't technically required but it produces a file named `AuditBitlocker.mof` rather than the default, `localhost.mof`. Having the .mof file name follow the
Run the following command to create a package using the configuration given in t
```azurepowershell-interactive New-GuestConfigurationPackage ` -Name 'AuditBitlocker' `
- -Configuration './Config/AuditBitlocker.mof'
+ -Configuration './AuditBitlocker/AuditBitlocker.mof'
``` After creating the Configuration package but before publishing it to Azure, you can test the package
The cmdlet also supports input from the PowerShell pipeline. Pipe the output of
`New-GuestConfigurationPackage` cmdlet to the `Test-GuestConfigurationPackage` cmdlet. ```azurepowershell-interactive
-New-GuestConfigurationPackage -Name AuditBitlocker -Configuration ./Config/AuditBitlocker.mof | Test-GuestConfigurationPackage
+New-GuestConfigurationPackage -Name AuditBitlocker -Configuration ./AuditBitlocker/AuditBitlocker.mof | Test-GuestConfigurationPackage
```
-The next step is to publish the file to Azure Blob Storage. The command `Publish-GuestConfigurationPackage` requires the `Az.Storage`
-module.
+The next step is to publish the file to Azure Blob Storage. There are no special requirements for the storage account,
+but it's a good idea to host the file in a region near your machines. If you don't have a storage account,
+use the following example. The commands below, including `Publish-GuestConfigurationPackage`,
+require the `Az.Storage` module.
+
+```azurepowershell-interactive
+# Creates a new resource group, storage account, and container
+New-AzResourceGroup -name myResourceGroupName -Location WestUS
+New-AzStorageAccount -ResourceGroupName myResourceGroupName -Name myStorageAccountName -SkuName 'Standard_LRS' -Location 'WestUs' | New-AzStorageContainer -Name guestconfiguration -Permission Blob
+```
Parameters of the `Publish-GuestConfigurationPackage` cmdlet:
After the DSC resource has been installed in the development environment, use th
**FilesToInclude** parameter for `New-GuestConfigurationPackage` to include content for the third-party platform in the content artifact.
-### Step by step, creating a content artifact that uses third-party tools
-
-Only the `New-GuestConfigurationPackage` cmdlet requires a change from the step-by-step guidance for
-DSC content artifacts. For this example, use the `gcInSpec` module to extend Guest Configuration to
-audit Windows machines using the InSpec platform rather than the built-in module used on Linux. The
-community module is maintained as an
-[open source project in GitHub](https://github.com/microsoft/gcinspec).
-
-Install required modules in your development environment:
-
-```azurepowershell-interactive
-# Update PowerShellGet if needed to allow installing PreRelease versions of modules
-Install-Module PowerShellGet -Force
-
-# Install GuestConfiguration module prerelease version
-Install-Module GuestConfiguration -allowprerelease
-
-# Install commmunity supported gcInSpec module
-Install-Module gcInSpec
-```
-
-First, create the YaML file used by InSpec. The file provides basic information about the
-environment. An example is given below:
-
-```YaML
-name: wmi_service
-title: Verify WMI service is running
-maintainer: Microsoft Corporation
-summary: Validates that the Windows Service 'winmgmt' is running
-copyright: Microsoft Corporation
-license: MIT
-version: 1.0.0
-supports:
- - os-family: windows
-```
-
-Save this file named `wmi_service.yml` in a folder named `wmi_service` in your project directory.
-
-Next, create the Ruby file with the InSpec language abstraction used to audit the machine.
-
-```Ruby
-control 'wmi_service' do
- impact 1.0
- title 'Verify windows service: winmgmt'
- desc 'Validates that the service, is installed, enabled, and running'
-
- describe service('winmgmt') do
- it { should be_installed }
- it { should be_enabled }
- it { should be_running }
- end
-end
-
-```
-
-Save this file `wmi_service.rb` in a new folder named `controls` inside the `wmi_service` directory.
-
-Finally, create a configuration, import the **GuestConfiguration** resource module, and use the
-`gcInSpec` resource to set the name of the InSpec profile.
-
-```powershell
-# Define the configuration and import GuestConfiguration
-Configuration wmi_service
-{
- Import-DSCResource -Module @{ModuleName = 'gcInSpec'; ModuleVersion = '2.1.0'}
- node 'wmi_service'
- {
- gcInSpec wmi_service
- {
- InSpecProfileName = 'wmi_service'
- InSpecVersion = '3.9.3'
- WindowsServerVersion = '2016'
- }
- }
-}
-
-# Compile the configuration to create the MOF files
-wmi_service -out ./Config
-```
-
-You should now have a project structure as below:
-
-```file
-/ wmi_service
- / Config
- wmi_service.mof
- / wmi_service
- wmi_service.yml
- / controls
- wmi_service.rb
-```
-
-The supporting files must be packaged together. The completed package is used by Guest Configuration
-to create the Azure Policy definitions.
-
-The `New-GuestConfigurationPackage` cmdlet creates the package. For third-party content, use the
-**FilesToInclude** parameter to add the InSpec content to the package. You don't need to specify the
-**ChefProfilePath** as for Linux packages.
-
-- **Name**: Guest Configuration package name.
-- **Configuration**: Compiled configuration document full path.
-- **Path**: Output folder path. This parameter is optional. If not specified, the package is created
-  in current directory.
-- **FilesoInclude**: Full path to InSpec profile.
-
-Run the following command to create a package using the configuration given in
-the previous step:
-
-```azurepowershell-interactive
-New-GuestConfigurationPackage `
- -Name 'wmi_service' `
- -Configuration './Config/wmi_service.mof' `
- -FilesToInclude './wmi_service' `
- -Path './package'
-```
- ## Policy lifecycle If you would like to release an update to the policy, make the change for both the Guest Configuration
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/advanced.md
Title: Advanced query samples description: Use Azure Resource Graph to run some advanced queries, including working with columns, listing tags used, and matching resources with regular expressions. Previously updated : 01/27/2021 Last updated : 03/23/2021 # Advanced Resource Graph query samples
We'll walk through the following advanced queries:
- [List all extensions installed on a virtual machine](#join-vmextension)
- [Find storage accounts with a specific tag on the resource group](#join-findstoragetag)
- [Combine results from two queries into a single result](#unionresults)
-- [Include the tenant and subscription names with DisplayNames](#displaynames)
- [Summarize virtual machine by the power states extended property](#vm-powerstate)
- [Count of non-compliant Guest Configuration assignments](#count-gcnoncompliant)
- [Query details of Guest Configuration assignment reports](#query-gcreports)
Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachi
-## <a name="displaynames"></a>Include the tenant and subscription names with DisplayNames
-
-This query uses the **Include** parameter with option _DisplayNames_ to add
-**subscriptionDisplayName** and **tenantDisplayName** to the results. This parameter is only
-available for Azure CLI and Azure PowerShell.
-
-```azurecli-interactive
-az graph query -q "limit 1" --include displayNames
-```
-
-```azurepowershell-interactive
-Search-AzGraph -Query "limit 1" -Include DisplayNames
-```
-
-An alternative to getting the subscription name is to use the `join` operator and connect to the
-**ResourceContainers** table and the `Microsoft.Resources/subscriptions` type. `join` works in Azure
-CLI, Azure PowerShell, portal, and all supported SDK. For an example, see
-[Sample - Key vault with subscription name](#join).
-
-> [!NOTE]
-> If the query doesn't use **project** to specify the returned properties,
-> **subscriptionDisplayName** and **tenantDisplayName** are automatically included in the results.
-> If the query does use **project**, each of the _DisplayName_ fields must be explicitly included in
-> the **project** or they won't be returned in the results, even when the **Include** parameter is
-> used. The **Include** parameter doesn't work with
-> [tables](../concepts/query-language.md#resource-graph-tables).
-
## <a name="count-gcnoncompliant"></a>Count of non-compliant Guest Configuration assignments

Displays a count of non-compliant machines per [Guest Configuration assignment reason](../../policy/how-to/determine-non-compliance.md#compliance-details-for-guest-configuration). Limits results to first 100 for performance.
hdinsight Apache Ambari Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/apache-ambari-email.md
In this tutorial, you learn how to:
1. From the Overview page, select **Manage** to go to the SendGrid webpage for your account.
- ![SendGrid overview in azure portal](./media/apache-ambari-email/azure-portal-sendgrid-manage.png)
+ :::image type="content" source="./media/apache-ambari-email/azure-portal-sendgrid-manage.png" alt-text="SendGrid overview in the Azure portal":::
1. From the left menu, navigate to your account name and then **Account Details**.
- ![SendGrid dashboard navigation](./media/apache-ambari-email/sendgrid-dashboard-navigation.png)
+ :::image type="content" source="./media/apache-ambari-email/sendgrid-dashboard-navigation.png" alt-text="SendGrid dashboard navigation":::
1. From the **Account Details** page, record the **Username**.
- ![SendGrid account details](./media/apache-ambari-email/sendgrid-account-details.png)
+ :::image type="content" source="./media/apache-ambari-email/sendgrid-account-details.png" alt-text="SendGrid account details":::
## Configure Ambari e-mail notification
In this tutorial, you learn how to:
1. From the **Manage Alert Notifications** window, select the **+** icon.
- ![Screenshot shows the Manage Alert Notifications dialog box.](./media/apache-ambari-email/azure-portal-create-notification.png)
+ :::image type="content" source="./media/apache-ambari-email/azure-portal-create-notification.png" alt-text="Screenshot shows the Manage Alert Notifications dialog box.":::
1. From the **Create Alert Notification** dialog, provide the following information:
In this tutorial, you learn how to:
|Password Confirmation|Reenter the password.|
|Start TLS|Select this check box.|
- ![Screenshot shows the Create Alert Notification dialog box.](./media/apache-ambari-email/ambari-create-alert-notification.png)
+ :::image type="content" source="./media/apache-ambari-email/ambari-create-alert-notification.png" alt-text="Screenshot shows the Create Alert Notification dialog box.":::
Select **Save**. You'll return to the **Manage Alert Notifications** window.
hdinsight Apache Kafka Spark Structured Streaming Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md
Spark structured streaming is a stream processing engine built on Spark SQL. It
Apache Kafka on HDInsight doesn't provide access to the Kafka brokers over the public internet. Anything that talks to Kafka must be in the same Azure virtual network as the nodes in the Kafka cluster. For this example, both the Kafka and Spark clusters are located in an Azure virtual network. The following diagram shows how communication flows between the clusters:
-![Diagram of Spark and Kafka clusters in an Azure virtual network](./media/apache-kafka-spark-structured-streaming-cosmosdb/apache-spark-kafka-vnet.png)
> [!NOTE] > The Kafka service is limited to communication within the virtual network. Other services on the cluster, such as SSH and Ambari, can be accessed over the internet. For more information on the public ports available with HDInsight, see [Ports and URIs used by HDInsight](hdinsight-hadoop-port-settings-for-services.md).
While you can create an Azure virtual network, Kafka, and Spark clusters manuall
|Ssh User Name|The SSH user to create for the Spark and Kafka clusters.| |Ssh Password|The password for the SSH user for the Spark and Kafka clusters.|
- ![HDInsight custom deployment values](./media/apache-kafka-spark-structured-streaming-cosmosdb/hdi-custom-parameters.png)
+ :::image type="content" source="./media/apache-kafka-spark-structured-streaming-cosmosdb/hdi-custom-parameters.png" alt-text="HDInsight custom deployment values":::
1. Read the **Terms and Conditions**, and then select **I agree to the terms and conditions stated above**.
hdinsight Cluster Availability Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/cluster-availability-monitor-logs.md
As a prerequisite, you'll need a Log Analytics Workspace to store the collected
From the HDInsight cluster resource page in the portal, select **Azure Monitor**. Then, select **enable** and select your Log Analytics workspace from the drop-down.
-![HDInsight Operations Management Suite](media/cluster-availability-monitor-logs/azure-portal-monitoring.png)
By default, this installs the OMS agent on all of the cluster nodes except edge nodes. Because no OMS agent is installed on edge nodes, no edge-node telemetry is present in Log Analytics by default.
By default, this installs the OMS agent on all of the cluster nodes except for e
Once Azure Monitor log integration is enabled (this may take a few minutes), navigate to your **Log Analytics Workspace** resource and select **Logs**.
-![Log Analytics workspace logs](media/cluster-availability-monitor-logs/hdinsight-portal-logs.png)
Logs list a number of sample queries, such as:
Logs list a number of sample queries, such as:
As an example, run the **Availability rate** sample query by selecting **Run** on that query, as shown in the screenshot above. This will show the availability rate of each node in your cluster as a percentage. If you have enabled multiple HDInsight clusters to send metrics to the same Log Analytics workspace, you'll see the availability rate for all nodes (excluding edge nodes) in those clusters displayed.
-![Log Analytics workspace logs 'availability rate' sample query](media/cluster-availability-monitor-logs/portal-availability-rate.png)
> [!NOTE] > Availability rate is measured over a 24-hour period, so your cluster will need to run for at least 24 hours before you see accurate availability rates.
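The portal's built-in sample isn't reproduced in this article, but an availability-rate style query over the **Heartbeat** table can be sketched as follows. This is a hedged approximation, not the exact built-in sample; the workspace GUID is a placeholder, and `az monitor log-analytics query` requires the `log-analytics` Azure CLI extension. Because agents send a heartbeat roughly once per minute, 1,440 heartbeats represent 100% availability over 24 hours:

```azurecli-interactive
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Heartbeat | where TimeGenerated > ago(24h) | summarize heartbeats = count() by Computer | extend availabilityRate = todouble(heartbeats) * 100 / 1440"
```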
You can also set up Azure Monitor alerts that will trigger when the value of a m
From **Logs**, run the **Unavailable computers** sample query by selecting **Run** on that query, as shown below.
-![Log Analytics workspace logs 'unavailable computers' sample](media/cluster-availability-monitor-logs/portal-unavailable-computers.png)
If all nodes are available, this query should return zero results for now. Click **New alert rule** to begin configuring your alert for this query.
-![Log Analytics workspace new alert rule](media/cluster-availability-monitor-logs/portal-logs-new-alert-rule.png)
There are three components to an alert: the *resource* for which to create the rule (the Log Analytics workspace in this case), the *condition* to trigger the alert, and the *action groups* that determine what will happen when the alert is triggered. Click the **condition title**, as shown below, to finish configuring the signal logic.
-![Portal alert create rule condition](media/cluster-availability-monitor-logs/portal-condition-title.png)
This will open **Configure signal logic**.
For the purpose of this alert, you want to make sure **Period=Frequency.** More
Select **Done** when you're finished configuring the signal logic.
-![Alert rule configures signal logic](media/cluster-availability-monitor-logs/portal-configure-signal-logic.png)
If you don't already have an existing action group, click **Create New** under the **Action Groups** section.
-![Alert rule creates new action group](media/cluster-availability-monitor-logs/portal-create-new-action-group.png)
This will open **Add action group**. Choose an **Action group name**, **Short name**, **Subscription**, and **Resource group.** Under the **Actions** section, choose an **Action Name** and select **Email/SMS/Push/Voice** as the **Action Type.**
This will open **Add action group**. Choose an **Action group name**, **Short na
This will open **Email/SMS/Push/Voice**. Choose a **Name** for the recipient, **check** the **Email** box, and type an email address to which you want the alert sent. Select **OK** in **Email/SMS/Push/Voice**, then in **Add action group** to finish configuring your action group.
-![Alert rule creates add action group](media/cluster-availability-monitor-logs/portal-add-action-group.png)
After these blades close, you should see your action group listed under the **Action Groups** section. Finally, complete the **Alert Details** section by typing an **Alert Rule Name** and **Description** and choosing a **Severity**. Click **Create Alert Rule** to finish.
-![Portal creates alert rule finish](media/cluster-availability-monitor-logs/portal-create-alert-rule-finish.png)
> [!TIP] > The ability to specify **Severity** is a powerful tool that can be used when creating multiple alerts. For example, you could create one alert to raise a Warning (Sev 1) if a single head node goes down and another alert that raises Critical (Sev 0) in the unlikely event that both head nodes go down. When the condition for this alert is met, the alert will fire and you'll receive an email with the alert details like this:
-![Azure Monitor alert email example](media/cluster-availability-monitor-logs/portal-oms-alert-email.png)
You can also view all alerts that have fired, grouped by severity, by going to **Alerts** in your **Log Analytics Workspace**.
-![Log Analytics workspace alerts](media/cluster-availability-monitor-logs/hdi-portal-oms-alerts.png)
Selecting a severity grouping (for example, **Sev 1**, as highlighted above) shows records for all alerts of that severity that have fired, like below:
-![Log Analytics workspace sev one alert](media/cluster-availability-monitor-logs/portal-oms-alerts-sev1.png)
## Next steps
hdinsight Connect On Premises Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/connect-on-premises-network.md
These configurations enable the following behavior:
In the following diagram, green lines are requests for resources that end in the DNS suffix of the virtual network. Blue lines are requests for resources in the on-premises network or on the public internet.
-![Diagram of how DNS requests are resolved in the configuration](./media/connect-on-premises-network/on-premises-to-cloud-dns.png)
## Prerequisites
These steps use the [Azure portal](https://portal.azure.com) to create an Azure
1. From the top menu, select **+ Create a resource**.
- ![Create an Ubuntu virtual machine](./media/connect-on-premises-network/azure-portal-create-resource.png)
+ :::image type="content" source="./media/connect-on-premises-network/azure-portal-create-resource.png" alt-text="Create an Ubuntu virtual machine":::
1. Select **Compute** > **Virtual machine** to go to the **Create a virtual machine** page.
These steps use the [Azure portal](https://portal.azure.com) to create an Azure
|Password or SSH public key | The available field is determined by your choice for **Authentication type**. Enter the appropriate value.| |Public inbound ports|Select **Allow selected ports**. Then select **SSH (22)** from the **Select inbound ports** drop-down list.|
- ![Virtual machine basic configuration](./media/connect-on-premises-network/virtual-machine-basics.png)
+ :::image type="content" source="./media/connect-on-premises-network/virtual-machine-basics.png" alt-text="Virtual machine basic configuration":::
Leave other entries at the default values and then select the **Networking** tab.
These steps use the [Azure portal](https://portal.azure.com) to create an Azure
|Subnet | Select the default subnet for the virtual network that you created earlier. Do __not__ select the subnet used by the VPN gateway.| |Public IP | Use the autopopulated value. |
- ![HDInsight Virtual network settings](./media/connect-on-premises-network/virtual-network-settings.png)
+ :::image type="content" source="./media/connect-on-premises-network/virtual-network-settings.png" alt-text="HDInsight Virtual network settings":::
Leave other entries at the default values and then select the **Review + create**.
Once the virtual machine has been created, you'll receive a **Deployment succeed
2. Note the values for **PUBLIC IP ADDRESS/DNS NAME LABEL** and **PRIVATE IP ADDRESS** for later use.
- ![Public and private IP addresses](./media/connect-on-premises-network/virtual-machine-ip-addresses.png)
+ :::image type="content" source="./media/connect-on-premises-network/virtual-machine-ip-addresses.png" alt-text="Public and private IP addresses":::
### Install and configure Bind (DNS software)
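The essential piece of the Bind configuration is forwarding names that the DNS server can't resolve locally to Azure's recursive resolver at 168.63.129.16. A minimal sketch of `/etc/bind/named.conf.options`, assuming a virtual network address space of 10.0.0.0/16 (adjust the `goodclients` ACL to your own ranges):

```
acl goodclients {
    10.0.0.0/16;    # virtual network address space (assumption; adjust to yours)
    localhost;
    localnets;
};

options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { goodclients; };
    # Forward requests this server can't answer to the Azure recursive resolver
    forwarders { 168.63.129.16; };
    dnssec-validation auto;
    auth-nxdomain no;
    listen-on-v6 { any; };
};
```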
To configure the virtual network to use the custom DNS server instead of the Azu
5. Select __Save__. <br />
- ![Set the custom DNS server for the network](./media/connect-on-premises-network/configure-custom-dns.png)
+ :::image type="content" source="./media/connect-on-premises-network/configure-custom-dns.png" alt-text="Set the custom DNS server for the network":::
## Configure on-premises DNS server
hdinsight Control Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/control-network-traffic.md
Network traffic in an Azure Virtual Networks can be controlled using the followi
As a managed service, HDInsight requires unrestricted access to the HDInsight health and management services for both incoming and outgoing traffic from the VNET. When using NSGs, you must ensure that these services can still communicate with the HDInsight cluster.
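For example, an inbound rule that permits the management traffic over HTTPS can be sketched with the Azure CLI using the `HDInsight` service tag. The resource group, NSG name, rule name, and priority below are illustrative placeholders:

```azurecli-interactive
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myHdiNsg \
  --name AllowHDInsightManagement \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes HDInsight \
  --destination-port-ranges 443
```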
-![Diagram of HDInsight entities created in Azure custom VNET](./media/control-network-traffic/hdinsight-vnet-diagram.png)
## HDInsight with network security groups
hdinsight Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/disk-encryption.md
HDInsight only supports Azure Key Vault. If you have your own key vault, you can
1. From your new key vault, navigate to **Settings** > **Keys** > **+ Generate/Import**.
- ![Generate a new key in Azure Key Vault](./media/disk-encryption/create-new-key.png "Generate a new key in Azure Key Vault")
+ :::image type="content" source="./media/disk-encryption/create-new-key.png" alt-text="Generate a new key in Azure Key Vault":::
1. Provide a name, then select **Create**. Maintain the default **Key Type** of **RSA**.
- ![generates key name](./media/disk-encryption/create-key.png "Generate key name")
+ :::image type="content" source="./media/disk-encryption/create-key.png" alt-text="Generate key name":::
1. When you return to the **Keys** page, select the key you created.
- ![key vault key list](./media/disk-encryption/key-vault-key-list.png)
+ :::image type="content" source="./media/disk-encryption/key-vault-key-list.png" alt-text="Key vault key list":::
1. Select the version to open the **Key Version** page. When you use your own key for HDInsight cluster encryption, you need to provide the key URI. Copy the **Key identifier** and save it somewhere until you're ready to create your cluster.
- ![get key identifier](./media/disk-encryption/get-key-identifier.png)
+ :::image type="content" source="./media/disk-encryption/get-key-identifier.png" alt-text="Get key identifier":::
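As an alternative to copying the value from the portal, the key identifier can be retrieved with the Azure CLI. This is a sketch; the vault and key names are placeholders:

```azurecli-interactive
# Returns the full key identifier URI, including the current key version
az keyvault key show \
  --vault-name myKeyVault \
  --name myClusterKey \
  --query key.kid \
  --output tsv
```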
### Create access policy 1. From your new key vault, navigate to **Settings** > **Access policies** > **+ Add Access Policy**.
- ![Create new Azure Key Vault access policy](./media/disk-encryption/key-vault-access-policy.png)
+ :::image type="content" source="./media/disk-encryption/key-vault-access-policy.png" alt-text="Create new Azure Key Vault access policy":::
1. From the **Add access policy** page, provide the following information:
HDInsight only supports Azure Key Vault. If you have your own key vault, you can
|Secret Permissions|Select **Get**, **Set**, and **Delete**.| |Select principal|Select the user-assigned managed identity you created earlier.|
- ![Set Select Principal for Azure Key Vault access policy](./media/disk-encryption/azure-portal-add-access-policy.png)
+ :::image type="content" source="./media/disk-encryption/azure-portal-add-access-policy.png" alt-text="Set Select Principal for Azure Key Vault access policy":::
1. Select **Add**. 1. Select **Save**.
- ![Save Azure Key Vault access policy](./media/disk-encryption/add-key-vault-access-policy-save.png)
+ :::image type="content" source="./media/disk-encryption/add-key-vault-access-policy-save.png" alt-text="Save Azure Key Vault access policy":::
### Create cluster with customer-managed key disk encryption
During cluster creation, you can either use a versioned key, or a versionless ke
You also need to assign the managed identity to the cluster.
-![Create new cluster](./media/disk-encryption/create-cluster-portal.png)
#### Using Azure CLI
You can change the encryption keys used on your running cluster, using the Azure
To rotate the key, you need the base key vault URI. Once you've done that, go to the HDInsight cluster properties section in the portal and click on **Change Key** under **Disk Encryption Key URL**. Enter in the new key url and submit to rotate the key.
-![rotate disk encryption key](./media/disk-encryption/change-key.png)
#### Using Azure CLI
No, all managed disks and resource disks are encrypted by the same key.
If the cluster loses access to the key, warnings will be shown in the Apache Ambari portal. In this state, the **Change Key** operation will fail. Once key access is restored, Ambari warnings will go away and operations such as key rotation can be successfully performed.
-![key access Ambari alert](./media/disk-encryption/ambari-alert.png)
**How can I recover the cluster if the keys are deleted?**
hdinsight Apache Domain Joined Configure Using Azure Adds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/apache-domain-joined-configure-using-azure-adds.md
New-SelfSignedCertificate -Subject contoso100.onmicrosoft.com `
View the health status of Azure Active Directory Domain Services by selecting **Health** in the **Manage** category. Make sure the status of Azure AD DS is green (running) and the synchronization is complete.
-![Azure AD DS health](./media/apache-domain-joined-configure-using-azure-adds/hdinsight-aadds-health.png)
### Create and authorize a managed identity
To set up ESP clusters, create a user-assigned managed identity if you don't hav
Next, assign the **HDInsight Domain Services Contributor** role to the managed identity in **Access control** for Azure AD DS. You need Azure AD DS admin privileges to make this role assignment.
-![Azure Active Directory Domain Services Access control](./media/apache-domain-joined-configure-using-azure-adds/hdinsight-configure-managed-identity.png)
Assigning the **HDInsight Domain Services Contributor** role ensures that this identity has proper (`on behalf of`) access to do domain services operations on the Azure AD DS domain. These operations include creating and deleting OUs.
After the managed identity is given the role, the Azure AD DS admin manages who
For example, the Azure AD DS admin can assign this role to the **MarketingTeam** group for the **sjmsi** managed identity. An example is shown in the following image. This assignment ensures the right people in the organization can use the managed identity to create ESP clusters.
-![HDInsight Managed Identity Operator Role Assignment](./media/apache-domain-joined-configure-using-azure-adds/hdinsight-managed-identity-operator-role-assignment.png)
### Network configuration
For example, the Azure AD DS admin can assign this role to the **MarketingTeam**
When you enable Azure AD DS, a local Domain Name System (DNS) server runs on the Active Directory virtual machines (VMs). Configure your Azure AD DS virtual network to use these custom DNS servers. To locate the right IP addresses, select **Properties** in the **Manage** category and look under **IP ADDRESS ON VIRTUAL NETWORK**.
-![Locate IP addresses for local DNS servers](./media/apache-domain-joined-configure-using-azure-adds/hdinsight-aadds-dns1.png)
Change the configuration of the DNS servers in the Azure AD DS virtual network. To use these custom IPs, select **DNS servers** in the **Settings** category. Then select the **Custom** option, enter the first IP address in the text box, and select **Save**. Add more IP addresses by using the same steps.
-![Updating the virtual network DNS configuration](./media/apache-domain-joined-configure-using-azure-adds/hdinsight-aadds-vnet-configuration.png)
It's easier to place both the Azure AD DS instance and the HDInsight cluster in the same Azure virtual network. If you plan to use different virtual networks, you must peer those virtual networks so that the domain controller is visible to HDInsight VMs. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md). After the virtual networks are peered, configure the HDInsight virtual network to use a custom DNS server. And enter the Azure AD DS private IPs as the DNS server addresses. When both virtual networks use the same DNS servers, your custom domain name will resolve to the right IP and will be reachable from HDInsight. For example, if your domain name is `contoso.com`, then after this step, `ping contoso.com` should resolve to the right Azure AD DS IP.
-![Configuring custom DNS servers for a peered virtual network](./media/apache-domain-joined-configure-using-azure-adds/hdinsight-aadds-peered-vnet-configuration.png)
If you're using network security group (NSG) rules in your HDInsight subnet, you should allow the [required IPs](../hdinsight-management-ip-addresses.md) for both inbound and outbound traffic.
You can also enable the [HDInsight ID Broker](identity-broker.md) feature during
> [!NOTE] > The first six characters of the ESP cluster names must be unique in your environment. For example, if you have multiple ESP clusters in different virtual networks, choose a naming convention that ensures the first six characters on the cluster names are unique.
-![Domain validation for Azure HDInsight Enterprise Security Package](./media/apache-domain-joined-configure-using-azure-adds/azure-portal-cluster-security-networking-esp.png)
After you enable ESP, common misconfigurations related to Azure AD DS are automatically detected and validated. After you fix these errors, you can continue with the next step.
-![Azure HDInsight Enterprise Security Package failed domain validation](./media/apache-domain-joined-configure-using-azure-adds/azure-portal-cluster-security-networking-esp-error.png)
When you create an HDInsight cluster with ESP, you must supply the following parameters:
When you create an HDInsight cluster with ESP, you must supply the following par
The managed identity that you created can be chosen from the **User-assigned managed identity** drop-down list when you're creating a new cluster.
-![Azure HDInsight ESP Active Directory Domain Services managed identity](./media/apache-domain-joined-configure-using-azure-adds/azure-portal-cluster-security-networking-identity.png).
## Next steps
hdinsight Apache Domain Joined Create Configure Enterprise Security Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/apache-domain-joined-create-configure-enterprise-security-cluster.md
Before you use this process in your own environment:
* Enable Azure AD. * Sync on-premises user accounts to Azure AD.
-![Azure AD architecture diagram](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0002.png)
## Create an on-premises environment
In this section, you'll use an Azure Quickstart deployment template to create ne
Leave the remaining default values.
- ![Template for Create an Azure VM with a new Azure AD Forest](./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-azure-vm-ad-forest.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-azure-vm-ad-forest.png" alt-text="Template for Create an Azure VM with a new Azure AD Forest" border="true":::
1. Review the **Terms and Conditions**, and then select **I agree to the terms and conditions stated above**. 1. Select **Purchase**, and monitor the deployment and wait for it to complete. The deployment takes about 30 minutes to complete.
In this section, you'll create the users that will have access to the HDInsight
1. From the domain controller **Server Manager** dashboard, navigate to **Tools** > **Active Directory Users and Computers**.
- ![On the Server Manager dashboard, open Active Directory Management](./media/apache-domain-joined-create-configure-enterprise-security-cluster/server-manager-active-directory-screen.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/server-manager-active-directory-screen.png" alt-text="On the Server Manager dashboard, open Active Directory Management" border="true":::
1. Create two new users: **HDIAdmin** and **HDIUser**. These two users will sign in to HDInsight clusters. 1. From the **Active Directory Users and Computers** page, right-click `HDIFabrikam.com`, and then navigate to **New** > **User**.
- ![Create a new Active Directory user](./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-active-directory-user.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-active-directory-user.png" alt-text="Create a new Active Directory user" border="true":::
1. On the **New Object - User** page, enter `HDIUser` for **First name** and **User logon name**. The other fields will autopopulate. Then select **Next**.
- ![Create the first admin user object](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0020.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0020.png" alt-text="Create the first admin user object" border="true":::
1. In the pop-up window that appears, enter a password for the new account. Select **Password never expires**, and then **OK** at the pop-up message. 1. Select **Next**, and then **Finish** to create the new account. 1. Repeat the above steps to create the user `HDIAdmin`.
- ![Create a second admin user object](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0024.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0024.png" alt-text="Create a second admin user object" border="true":::
1. Create a global security group.
In this section, you'll create the users that will have access to the HDInsight
1. Select **OK**.
- ![Create a new Active Directory group](./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-active-directory-group.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-active-directory-group.png" alt-text="Create a new Active Directory group" border="true":::
- ![Create a new object](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0028.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0028.png" alt-text="Create a new object" border="true":::
1. Add members to **HDIUserGroup**.
In this section, you'll create the users that will have access to the HDInsight
1. In the **Enter the object names to select** text box, enter `HDIUserGroup`. Then select **OK**, and **OK** again at the pop-up. 1. Repeat the previous steps for the **HDIAdmin** account.
- ![Add the member HDIUser to the group HDIUserGroup](./media/apache-domain-joined-create-configure-enterprise-security-cluster/active-directory-add-users-to-group.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/active-directory-add-users-to-group.png" alt-text="Add the member HDIUser to the group HDIUserGroup" border="true":::
You've now created your Active Directory environment. You've added two users and a user group that can access the HDInsight cluster.
The users will be synchronized with Azure AD.
1. Under **Initial domain name**, enter `HDIFabrikamoutlook`. 1. Select **Create**.
- ![Create an Azure AD directory](./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-new-directory.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-new-directory.png" alt-text="Create an Azure AD directory" border="true":::
### Create a custom domain
The users will be synchronized with Azure AD.
1. Under **Custom domain name**, enter `HDIFabrikam.com`, and then select **Add domain**. 1. Then complete [Add your DNS information to the domain registrar](../../active-directory/fundamentals/add-custom-domain.md#add-your-dns-information-to-the-domain-registrar).
-![Create a custom domain](./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-custom-domain.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/create-custom-domain.png" alt-text="Create a custom domain" border="true":::
### Create a group
Create an Active Directory tenant administrator.
1. Enter the following details for the new user:
- **Identity**
+ **Identity**
- |Property |Description |
- |||
- |User name|Enter `fabrikamazureadmin` in the text box. From the domain name drop-down list, select `hdifabrikam.com`|
- |Name| Enter `fabrikamazureadmin`.|
+ |Property |Description |
+ |||
+ |User name|Enter `fabrikamazureadmin` in the text box. From the domain name drop-down list, select `hdifabrikam.com`|
+ |Name| Enter `fabrikamazureadmin`.|
- **Password**
- 1. Select **Let me create the password**.
- 1. Enter a secure password of your choice.
+ **Password**
+ 1. Select **Let me create the password**.
+ 1. Enter a secure password of your choice.
- **Groups and roles**
- 1. Select **0 groups selected**.
- 1. Select **AAD DC Administrators**, and then **Select**.
+ **Groups and roles**
+ 1. Select **0 groups selected**.
+ 1. Select **AAD DC Administrators**, and then **Select**.
- ![The Azure AD Groups dialog box](./media/apache-domain-joined-create-configure-enterprise-security-cluster/azure-ad-add-group-member.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/azure-ad-add-group-member.png" alt-text="The Azure AD Groups dialog box" border="true":::
- 1. Select **User**.
- 1. Select **Global administrator**, and then **Select**.
+ 1. Select **User**.
+ 1. Select **Global administrator**, and then **Select**.
- ![The Azure AD role dialog box](./media/apache-domain-joined-create-configure-enterprise-security-cluster/azure-ad-add-role-member.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/azure-ad-add-role-member.png" alt-text="The Azure AD role dialog box" border="true":::
1. Select **Create**.
Create an Active Directory tenant administrator.
1. On the **Connect to Azure AD** page, enter the username and password of the global administrator for Azure AD. Use the username `fabrikamazureadmin@hdifabrikam.com` that you created when you configured your Active Directory tenant. Then select **Next**.
- ![The "Connect to Azure A D" page.](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0058.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0058.png" alt-text="Connect to Azure A D" border="true":::
1. On the **Connect to Active Directory Domain Services** page, enter the username and password for an enterprise admin account. Use the username `HDIFabrikam\HDIFabrikamAdmin` and its password that you created earlier. Then select **Next**.
- ![The "Connect to A D D S" page.](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0060.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0060.png" alt-text="Connect to A D D S page." border="true":::
+ 1. On the **Azure AD sign-in configuration** page, select **Next**.
- ![The "Azure AD sign-in configuration" page](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0062.png)
+
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0062.png" alt-text="Azure AD sign-in configuration page" border="true":::
1. On the **Ready to configure** page, select **Install**.
- ![The "Ready to configure" page](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0064.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0064.png" alt-text="Ready to configure page" border="true":::
1. On the **Configuration complete** page, select **Exit**.
- ![The "Configuration complete" page](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0078.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0078.png" alt-text="Configuration complete page" border="true":::
1. After the sync completes, confirm that the users you created on the IaaS directory are synced to Azure AD.
1. Sign in to the Azure portal.
Create a user-assigned managed identity that you can use to configure Azure AD D
1. Under **Location**, select **Central US**.
1. Select **Create**.
-![Create a new user-assigned managed identity](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0082.png)
### Enable Azure AD DS
Follow these steps to enable Azure AD DS. For more information, see [Enable Azur
1. Sign in to the Azure portal.
1. Select **Create resource**, enter `Domain services`, and select **Azure AD Domain Services** > **Create**.
1. On the **Basics** page:
- 1. Under **Directory name**, select the Azure AD directory you created: **HDIFabrikam**.
- 1. For **DNS domain name**, enter *HDIFabrikam.com*.
- 1. Select your subscription.
- 1. Specify the resource group **HDIFabrikam-CentralUS**. For **Location**, select **Central US**.
+ 1. Under **Directory name**, select the Azure AD directory you created: **HDIFabrikam**.
+ 1. For **DNS domain name**, enter *HDIFabrikam.com*.
+ 1. Select your subscription.
+ 1. Specify the resource group **HDIFabrikam-CentralUS**. For **Location**, select **Central US**.
- ![Azure AD DS basic details](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0084.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0084.png" alt-text="Azure AD DS basic details" border="true":::
1. On the **Network** page, select the network (**HDIFabrikam-VNET**) and the subnet (**AADDS-subnet**) that you created by using the PowerShell script. Or choose **Create new** to create a virtual network now.
- ![The "Create virtual network" step](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0086.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0086.png" alt-text="Create virtual network step" border="true":::
1. On the **Administrator group** page, you should see a notification that a group named **AAD DC Administrators** has already been created to administer this domain. You can modify the membership of this group if you want to, but in this case you don't need to change it. Select **OK**.
- ![View the Azure AD administrator group](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0088.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0088.png" alt-text="View the Azure AD administrator group" border="true":::
1. On the **Synchronization** page, enable complete synchronization by selecting **All** > **OK**.
- ![Enable Azure AD DS synchronization](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0090.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0090.png" alt-text="Enable Azure AD DS synchronization" border="true":::
1. On the **Summary** page, verify the details for Azure AD DS and select **OK**.
- ![The summary of "Enable Azure AD Domain Services"](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0092.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0092.png" alt-text="Enable Azure AD Domain Services" border="true":::
After you enable Azure AD DS, a local DNS server runs on the Azure AD VMs.
Use the following steps to configure your Azure AD DS virtual network (**HDIFabrikam-AADDSVNET**) to use your custom DNS servers.
1. Locate the IP addresses of your custom DNS servers.
- 1. Select the `HDIFabrikam.com` Azure AD DS resource.
- 1. Under **Manage**, select **Properties**.
- 1. Find the IP addresses under **IP address on virtual network**.
+ 1. Select the `HDIFabrikam.com` Azure AD DS resource.
+ 1. Under **Manage**, select **Properties**.
+ 1. Find the IP addresses under **IP address on virtual network**.
- ![Locate custom DNS IP addresses for Azure AD DS](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0096.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0096.png" alt-text="Locate custom DNS IP addresses for Azure AD DS" border="true":::
1. Configure **HDIFabrikam-AADDSVNET** to use custom IP addresses 10.0.0.4 and 10.0.0.5.
- 1. Under **Settings**, select **DNS Servers**.
- 1. Select **Custom**.
- 1. In the text box, enter the first IP address (*10.0.0.4*).
- 1. Select **Save**.
- 1. Repeat the steps to add the other IP address (*10.0.0.5*).
+ 1. Under **Settings**, select **DNS Servers**.
+ 1. Select **Custom**.
+ 1. In the text box, enter the first IP address (*10.0.0.4*).
+ 1. Select **Save**.
+ 1. Repeat the steps to add the other IP address (*10.0.0.5*).
In our scenario, we configured Azure AD DS to use IP addresses 10.0.0.4 and 10.0.0.5, setting the same IP addresses on the Azure AD DS virtual network:
-![The custom DNS servers page](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0098.png)
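As a quick sanity check before moving on, you can confirm that the custom DNS IP addresses fall inside the virtual network's subnet. This is a minimal sketch; the subnet range `10.0.0.0/24` is an assumption based on the IPs used in this scenario, so adjust it to match your VNet:

```python
import ipaddress

# Assumed AADDS subnet range for this scenario; adjust to match your VNet.
AADDS_SUBNET = ipaddress.ip_network("10.0.0.0/24")

def dns_ips_in_subnet(ips, subnet=AADDS_SUBNET):
    """Return True if every custom DNS IP belongs to the given subnet."""
    return all(ipaddress.ip_address(ip) in subnet for ip in ips)

print(dns_ips_in_subnet(["10.0.0.4", "10.0.0.5"]))  # prints True for this scenario
```

If this returns False, the VNet's DNS settings point at addresses the Azure AD DS domain controllers can't be serving, and domain join operations will fail to resolve the managed domain.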
## Securing LDAP traffic
Verify that the certificate is installed in the computer's **Personal** store:
1. Add the **Certificates** snap-in that manages certificates on the local computer.
1. Expand **Certificates (Local Computer)** > **Personal** > **Certificates**. A new certificate should exist in the **Personal** store. This certificate is issued to the fully qualified host name.
- ![Verify local certificate creation](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0102.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0102.png" alt-text="Verify local certificate creation" border="true":::
1. In the pane on the right, right-click the certificate that you created. Point to **All Tasks**, and then select **Export**.
1. On the **Export Private Key** page, select **Yes, export the private key**. The computer where the key will be imported needs the private key to read the encrypted messages.
- ![The Export Private Key page of the Certificate Export Wizard](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0103.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0103.png" alt-text="The Export Private Key page of the Certificate Export Wizard" border="true":::
1. On the **Export File Format** page, leave the default settings, and then select **Next**.
1. On the **Password** page, type a password for the private key. For **Encryption**, select **TripleDES-SHA1**. Then select **Next**.
1. On the **File to Export** page, type the path and the name for the exported certificate file, and then select **Next**. The file name has to have a .pfx extension. This file is configured in the Azure portal to establish a secure connection.
1. Enable LDAPS for an Azure AD DS managed domain.
- 1. From the Azure portal, select the domain `HDIFabrikam.com`.
- 1. Under **Manage**, select **Secure LDAP**.
- 1. On the **Secure LDAP** page, under **Secure LDAP**, select **Enable**.
- 1. Browse for the .pfx certificate file that you exported on your computer.
- 1. Enter the certificate password.
+ 1. From the Azure portal, select the domain `HDIFabrikam.com`.
+ 1. Under **Manage**, select **Secure LDAP**.
+ 1. On the **Secure LDAP** page, under **Secure LDAP**, select **Enable**.
+ 1. Browse for the .pfx certificate file that you exported on your computer.
+ 1. Enter the certificate password.
- ![Enable secure LDAP](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0113.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0113.png" alt-text="Enable secure LDAP" border="true":::
1. Now that you've enabled LDAPS, make sure it's reachable by allowing traffic on port 636.
- 1. In the **HDIFabrikam-CentralUS** resource group, select the network security group **AADDS-HDIFabrikam.com-NSG**.
- 1. Under **Settings**, select **Inbound security rules** > **Add**.
- 1. On the **Add inbound security rule** page, enter the following properties, and select **Add**:
-
- | Property | Value |
- |||
- | Source | Any |
- | Source port ranges | * |
- | Destination | Any |
- | Destination port range | 636 |
- | Protocol | Any |
- | Action | Allow |
- | Priority | \<Desired number> |
- | Name | Port_LDAP_636 |
-
- ![The "Add inbound security rule" dialog box](./media/apache-domain-joined-create-configure-enterprise-security-cluster/add-inbound-security-rule.png)
+ 1. In the **HDIFabrikam-CentralUS** resource group, select the network security group **AADDS-HDIFabrikam.com-NSG**.
+ 1. Under **Settings**, select **Inbound security rules** > **Add**.
+ 1. On the **Add inbound security rule** page, enter the following properties, and select **Add**:
+
+ | Property | Value |
+ |||
+ | Source | Any |
+ | Source port ranges | * |
+ | Destination | Any |
+ | Destination port range | 636 |
+ | Protocol | Any |
+ | Action | Allow |
+ | Priority | \<Desired number> |
+ | Name | Port_LDAP_636 |
+
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/add-inbound-security-rule.png" alt-text="The Add inbound security rule dialog box" border="true":::
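Once the rule is in place, you can verify from a client machine that port 636 accepts connections before configuring LDAPS clients. A minimal sketch using Python's standard library (the host name below is a placeholder for your managed domain's secure LDAP endpoint):

```python
import socket

def tcp_port_open(host, port=636, timeout=5):
    """Attempt a TCP connection; return True if the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host; replace with your secure LDAP endpoint, for example:
# tcp_port_open("ldaps.hdifabrikam.com")
```

A successful TCP connection only proves the NSG rule allows traffic; it doesn't validate the certificate. Use an LDAP client such as `ldp.exe` for a full LDAPS bind test.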
**HDIFabrikamManagedIdentity** is the user-assigned managed identity. The HDInsight Domain Services Contributor role is enabled for the managed identity, which allows it to read, create, modify, and delete domain services operations.
-![Create a user-assigned managed identity](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0117.png)
## Create an ESP-enabled HDInsight cluster
This step requires the following prerequisites:
1. Select **Custom** and enter *10.0.0.4* and *10.0.0.5*.
1. Select **Save**.
- ![Save custom DNS settings for a virtual network](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0123.png)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0123.png" alt-text="Save custom DNS settings for a virtual network" border="true":::
1. Create a new ESP-enabled HDInsight Spark cluster.
1. Select **Custom (size, settings, apps)**.
This step requires the following prerequisites:
* Select **Cluster admin user** and select the **HDIAdmin** account that you created as the on-premises admin user. Click **Select**.
* Select **Cluster access group** > **HDIUserGroup**. Any user that you add to this group in the future will be able to access HDInsight clusters.
- ![Select the cluster access group HDIUserGroup](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0129.jpg)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0129.jpg" alt-text="Select the cluster access group HDIUserGroup" border="true":::
1. Complete the other steps of the cluster configuration and verify the details on the **Cluster summary**. Select **Create**.
1. Sign in to the Ambari UI for the newly created cluster at `https://CLUSTERNAME.azurehdinsight.net`. Use your admin username `hdiadmin@hdifabrikam.com` and its password.
- ![The Apache Ambari UI sign-in window](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0135.jpg)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0135.jpg" alt-text="The Apache Ambari UI sign-in window" border="true":::
1. From the cluster dashboard, select **Roles**.
-1. On the **Roles** page, under **Assign roles to these**, next to the **Cluster Administrator** role, enter the group *hdiusergroup*.
+1. On the **Roles** page, under **Assign roles to these**, next to the **Cluster Administrator** role, enter the group *hdiusergroup*.
- ![Assign the cluster admin role to hdiusergroup](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0137.jpg)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0137.jpg" alt-text="Assign the cluster admin role to hdiusergroup" border="true":::
1. Open your Secure Shell (SSH) client and sign in to the cluster. Use the **hdiuser** that you created in the on-premises Active Directory instance.
- ![Sign in to the cluster by using the SSH client](./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0139.jpg)
+ :::image type="content" source="./media/apache-domain-joined-create-configure-enterprise-security-cluster/hdinsight-image-0139.jpg" alt-text="Sign in to the cluster by using the SSH client" border="true":::
If you can sign in with this account, you've configured your ESP cluster correctly to sync with your on-premises Active Directory instance.

## Next steps
-Read [An introduction to Apache Hadoop security with ESP](hdinsight-security-overview.md).
+Read [An introduction to Apache Hadoop security with ESP](hdinsight-security-overview.md).
hdinsight Apache Domain Joined Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/apache-domain-joined-manage.md
HDInsight Enterprise Security Package has the following roles:
2. From the left menu, select **Roles**.
3. Select the blue question mark to see the permissions:
- ![ESP HDInsight roles permissions](./media/apache-domain-joined-manage/hdinsight-domain-joined-roles-permissions.png)
+ :::image type="content" source="./media/apache-domain-joined-manage/hdinsight-domain-joined-roles-permissions.png" alt-text="ESP HDInsight roles permissions" border="true":::
## Open the Ambari Management UI
HDInsight Enterprise Security Package has the following roles:
1. Sign in to Ambari using the cluster administrator domain user name and password.
1. Select the **admin** dropdown menu from the upper right corner, and then select **Manage Ambari**.
- ![ESP HDInsight manage Apache Ambari](./media/apache-domain-joined-manage/hdinsight-domain-joined-manage-ambari.png)
+ :::image type="content" source="./media/apache-domain-joined-manage/hdinsight-domain-joined-manage-ambari.png" alt-text="ESP HDInsight manage Apache Ambari" border="true":::
The UI looks like:
- ![ESP HDInsight Apache Ambari management UI](./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui.png)
+ :::image type="content" source="./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui.png" alt-text="ESP HDInsight Apache Ambari management UI" border="true":::
## List the domain users synchronized from your Active Directory

1. Open the Ambari Management UI. See [Open the Ambari Management UI](#open-the-ambari-management-ui).
2. From the left menu, select **Users**. You should see all the users synced from your Active Directory to the HDInsight cluster.
- ![ESP HDInsight Ambari management UI list users](./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-users.png)
+ :::image type="content" source="./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-users.png" alt-text="ESP HDInsight Ambari management UI list users" border="true":::
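The same user list is available through Ambari's REST API, which is convenient for scripted checks that sync completed. A minimal sketch using Python's standard library; the cluster name and credentials are placeholders, and the response shape assumes Ambari's standard `/api/v1/users` resource:

```python
import base64
import json
import urllib.request

def ambari_users_url(cluster_name):
    """Build the Ambari REST endpoint that lists all synced users."""
    return f"https://{cluster_name}.azurehdinsight.net/api/v1/users"

def list_ambari_users(cluster_name, username, password):
    """Fetch user names from Ambari using HTTP basic authentication."""
    req = urllib.request.Request(ambari_users_url(cluster_name))
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [item["Users"]["user_name"] for item in data["items"]]

# Example (placeholders):
# list_ambari_users("CLUSTERNAME", "hdiadmin@hdifabrikam.com", "password")
```

Domain users should appear in this list once the Active Directory sync has completed.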
## List the domain groups synchronized from your Active Directory

1. Open the Ambari Management UI. See [Open the Ambari Management UI](#open-the-ambari-management-ui).
2. From the left menu, select **Groups**. You should see all the groups synced from your Active Directory to the HDInsight cluster.
- ![ESP HDInsight Ambari management UI list groups](./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-groups.png)
+ :::image type="content" source="./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-groups.png" alt-text="ESP HDInsight Ambari management UI list groups" border="true":::
## Configure Hive Views permissions
HDInsight Enterprise Security Package has the following roles:
2. From the left menu, select **Views**.
3. Select **HIVE** to show the details.
- ![ESP HDInsight Ambari management UI Hive Views](./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-hive-views.png)
+ :::image type="content" source="./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-hive-views.png" alt-text="ESP HDInsight Ambari management UI Hive Views" border="true":::
4. Select the **Hive View** link to configure Hive Views.
5. Scroll down to the **Permissions** section.
- ![ESP HDInsight Ambari management UI Hive Views configure permissions](./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-hive-views-permissions.png)
+ :::image type="content" source="./media/apache-domain-joined-manage/hdinsight-domain-joined-ambari-management-ui-hive-views-permissions.png" alt-text="ESP HDInsight Ambari management UI Hive Views configure permissions" border="true":::
6. Select **Add User** or **Add Group**, and then specify the users or groups that can use Hive Views.
hdinsight Apache Domain Joined Run Hbase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/apache-domain-joined-run-hbase.md
You can use SSH to connect to HBase clusters and then use [Apache HBase Shell](h
scan 'Customers'
```
- ![HDInsight Hadoop HBase shell output](./media/apache-domain-joined-run-hbase/hbase-shell-scan-table.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hbase/hbase-shell-scan-table.png" alt-text="HDInsight Hadoop HBase shell output" border="true":::
## Create Ranger policies
Create a Ranger policy for **sales_user1** and **marketing_user1**.
1. Open the **Ranger Admin UI**. Click **\<ClusterName>_hbase** under **HBase**.
- ![HDInsight Apache Ranger Admin UI](./media/apache-domain-joined-run-hbase/apache-ranger-admin-login.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hbase/apache-ranger-admin-login.png" alt-text="HDInsight Apache Ranger Admin UI" border="true":::
2. The **List of Policies** screen will display all Ranger policies created for this cluster. One pre-configured policy may be listed. Click **Add New Policy**.
- ![Apache Ranger HBase policies list](./media/apache-domain-joined-run-hbase/apache-ranger-hbase-policies-list.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hbase/apache-ranger-hbase-policies-list.png" alt-text="Apache Ranger HBase policies list" border="true":::
3. On the **Create Policy** screen, enter the following values:
Create a Ranger policy for **sales_user1** and **marketing_user1**.
* `*` indicates zero or more characters.
* `?` indicates a single character.
- ![Apache Ranger policy create sales](./media/apache-domain-joined-run-hbase/apache-ranger-hbase-policy-create-sales.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hbase/apache-ranger-hbase-policy-create-sales.png" alt-text="Apache Ranger policy create sales" border="true":::
>[!NOTE]
>Wait a few moments for Ranger to sync with Azure AD if a domain user is not automatically populated for **Select User**.
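Ranger's wildcard matching on resource names follows the shell-glob convention described above. Python's `fnmatch` module implements the same semantics, so it can serve as a quick, purely illustrative check of how a pattern will match (this runs Python's matcher, not Ranger itself):

```python
from fnmatch import fnmatch

# '*' matches zero or more characters; '?' matches exactly one character.
assert fnmatch("Customers", "Cust*")        # '*' spans the remaining characters
assert fnmatch("Customers", "Customer?")    # '?' matches the final 's'
assert not fnmatch("Customers", "Custom?")  # '?' cannot match two characters
print("wildcard examples hold")
```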
Create a Ranger policy for **sales_user1** and **marketing_user1**.
|Select User | marketing_user1 |
|Permissions | Read |
- ![Apache Ranger policy create marketing](./media/apache-domain-joined-run-hbase/apache-ranger-hbase-policy-create-marketing.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hbase/apache-ranger-hbase-policy-create-marketing.png" alt-text="Apache Ranger policy create marketing" border="true":::
6. Click **Add** to save the policy.
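Policies like the ones above can also be created programmatically by POSTing JSON to Ranger's public REST API (`/service/public/v2/api/policy`). A minimal sketch of the payload builder; the service name `CLUSTERNAME_hbase` and the exact field set are assumptions based on Ranger's v2 policy schema, so verify against your Ranger version:

```python
import json

def hbase_read_policy(service, table, user):
    """Build a Ranger v2 policy granting read access on an HBase table to one user."""
    return {
        "service": service,                  # e.g. "<ClusterName>_hbase"
        "name": f"read_{table}_{user}",
        "resources": {
            "table": {"values": [table]},
            "column-family": {"values": ["*"]},
            "column": {"values": ["*"]},
        },
        "policyItems": [
            {"users": [user], "accesses": [{"type": "read", "isAllowed": True}]}
        ],
    }

payload = hbase_read_policy("CLUSTERNAME_hbase", "Customers", "marketing_user1")
print(json.dumps(payload, indent=2))
```

This mirrors the marketing policy from the table above: read-only access to the `Customers` table for `marketing_user1`.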
Based on the Ranger policies configured, **sales_user1** can view all of the dat
1. View the audit access events from the Ranger UI.
- ![HDInsight Ranger UI Policy Audit](./media/apache-domain-joined-run-hbase/apache-ranger-admin-audit.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hbase/apache-ranger-admin-audit.png" alt-text="HDInsight Ranger UI Policy Audit" border="true":::
## Clean up resources
hdinsight Apache Domain Joined Run Hive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/apache-domain-joined-run-hive.md
Learn how to configure Apache Ranger policies for Apache Hive. In this article,
2. Log in using the cluster administrator domain user name and password:
- ![HDInsight ESP Ranger home page](./media/apache-domain-joined-run-hive/hdinsight-domain-joined-ranger-home-page.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hive/hdinsight-domain-joined-ranger-home-page.png" alt-text="HDInsight ESP Ranger home page" border="true":::
Currently, Ranger only works with Yarn and Hive.
In this section, you create two Ranger policies for accessing hivesampletable. Y
|Select User|hiveuser1|
|Permissions|select|
- ![HDInsight ESP Ranger Hive policies configure](./media/apache-domain-joined-run-hive/hdinsight-domain-joined-configure-ranger-policy.png).
+ :::image type="content" source="./media/apache-domain-joined-run-hive/hdinsight-domain-joined-configure-ranger-policy.png" alt-text="HDInsight ESP Ranger Hive policies configure" border="true":::
> [!NOTE]
> If a domain user is not populated in Select User, wait a few moments for Ranger to sync with AAD.
In the last section, you've configured two policies. hiveuser1 has the select p
1. From the **Data** tab, navigate to **Get Data** > **From Other Sources** > **From ODBC** to launch the **From ODBC** window.
- ![Open data connection wizard](./media/apache-domain-joined-run-hive/simbahiveodbc-excel-dataconnection1.png)
+ :::image type="content" source="./media/apache-domain-joined-run-hive/simbahiveodbc-excel-dataconnection1.png" alt-text="Open data connection wizard" border="true":::
1. From the drop-down list, select the data source name that you created in the last section and then select **OK**.
hdinsight Apache Domain Joined Run Kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/apache-domain-joined-run-kafka.md
A [HDInsight Kafka cluster with Enterprise Security Package](./apache-domain-joi
2. Sign in using your Azure Active Directory (AD) admin credentials. The Azure AD admin credentials aren't the same as HDInsight cluster credentials or Linux HDInsight node SSH credentials.
- ![HDInsight Apache Ranger Admin UI](./media/apache-domain-joined-run-kafka/apache-ranger-admin-login.png)
+ :::image type="content" source="./media/apache-domain-joined-run-kafka/apache-ranger-admin-login.png" alt-text="HDInsight Apache Ranger Admin UI" border="true":::
## Create domain users
Create a Ranger policy for **sales_user** and **marketing_user**.
* `*` indicates zero or more characters.
* `?` indicates a single character.
- ![Apache Ranger Admin UI Create Policy1](./media/apache-domain-joined-run-kafka/apache-ranger-admin-create-policy.png)
+ :::image type="content" source="./media/apache-domain-joined-run-kafka/apache-ranger-admin-create-policy.png" alt-text="Apache Ranger Admin UI Create Policy1" border="true":::
Wait a few moments for Ranger to sync with Azure AD if a domain user is not automatically populated for **Select User**.
Create a Ranger policy for **sales_user** and **marketing_user**.
|Select User | marketing_user1 |
|Permissions | publish, consume, create |
- ![Apache Ranger Admin UI Create Policy2](./media/apache-domain-joined-run-kafka/apache-ranger-admin-create-policy-2.png)
+ :::image type="content" source="./media/apache-domain-joined-run-kafka/apache-ranger-admin-create-policy-2.png" alt-text="Apache Ranger Admin UI Create Policy2" border="true":::
6. Select **Add** to save the policy.
Based on the Ranger policies configured, **sales_user** can produce/consume topi
8. View the audit access events from the Ranger UI.
- ![Ranger UI policy audit access events ](./media/apache-domain-joined-run-kafka/apache-ranger-admin-audit.png)
+ :::image type="content" source="./media/apache-domain-joined-run-kafka/apache-ranger-admin-audit.png" alt-text="Ranger UI policy audit access events" border="true":::
## Produce and consume topics in ESP Kafka by using the console
hdinsight Hdinsight Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/hdinsight-security-overview.md
Azure compliance offerings are based on various types of assurances, including f
The following image summarizes the major system security areas and the security solutions that are available to you in each. It also highlights which security areas are your responsibility as a customer. And which areas are the responsibility of HDInsight as the service provider.
-![HDInsight shared responsibilities diagram](./media/hdinsight-security-overview/hdinsight-shared-responsibility.png)
The following table provides links to resources for each type of security solution.
The following table provides links to resources for each type of security soluti
* [Plan for HDInsight clusters with ESP](apache-domain-joined-architecture.md) * [Configure HDInsight clusters with ESP](./apache-domain-joined-configure-using-azure-adds.md)
-* [Manage HDInsight clusters with ESP](apache-domain-joined-manage.md)
+* [Manage HDInsight clusters with ESP](apache-domain-joined-manage.md)
hdinsight Identity Broker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/identity-broker.md
Use the following table to determine the best authentication option based on you
The following diagram shows the modern OAuth-based authentication flow for all users, including federated users, after HDInsight ID Broker is enabled:

In this diagram, the client (that is, a browser or app) needs to acquire the OAuth token first. Then it presents the token to the gateway in an HTTP request. If you've already signed in to other Azure services, such as the Azure portal, you can sign in to your HDInsight cluster with a single sign-on experience.
There still might be many legacy applications that only support basic authentica
The following diagram shows the basic authentication flow for federated users. First, the gateway attempts to complete the authentication by using [ROPC flow](../../active-directory/develop/v2-oauth-ropc.md). If no password hashes are synced to Azure AD, it falls back to discovering the AD FS endpoint and completes the authentication against the AD FS endpoint.

## Enable HDInsight ID Broker
To create an Enterprise Security Package cluster with HDInsight ID Broker enable
The HDInsight ID Broker feature adds one extra VM to the cluster. This VM is the HDInsight ID Broker node, and it includes server components to support authentication. The HDInsight ID Broker node is domain joined to the Azure AD DS domain.
-![Diagram that shows option to enable HDInsight ID Broker.](./media/identity-broker/identity-broker-enable.png)
### Use Azure Resource Manager templates
hdinsight Apache Ambari Troubleshoot Stale Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-ambari-troubleshoot-stale-alerts.md
This article describes troubleshooting steps and possible resolutions for issues
In the Apache Ambari UI, you might see an alert like this:
-![Apache Ambari stale alert example](./media/apache-ambari-troubleshoot-stale-alerts/ambari-stale-alerts-example.png)
## Cause
hdinsight Apache Hadoop Connect Excel Hive Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-connect-excel-hive-odbc-driver.md
The following steps show you how to create a Hive ODBC Data Source.
1. From Windows, navigate to **Start > Windows Administrative Tools > ODBC Data Sources (32-bit)/(64-bit)**. This action opens the **ODBC Data Source Administrator** window.
- ![OBDC data source administrator](./media/apache-hadoop-connect-excel-hive-odbc-driver/simbahiveodbc-datasourceadmin1.png "Configure a DSN using ODBC Data Source Administrator")
+ :::image type="content" source="./media/apache-hadoop-connect-excel-hive-odbc-driver/simbahiveodbc-datasourceadmin1.png" alt-text="ODBC data source administrator" border="true":::
1. From the **User DSN** tab, select **Add** to open the **Create New Data Source** window.
The following steps show you how to create a Hive ODBC Data Source.
| Rows fetched per block |When fetching a large number of records, tuning this parameter may be required to ensure optimal performance. |
| Default string column length, Binary column length, Decimal column scale |The data type lengths and precisions may affect how data is returned. They can cause incorrect information to be returned because of loss of precision or truncation. |
- ![Advanced DSN configuration options](./media/apache-hadoop-connect-excel-hive-odbc-driver/hiveodbc-datasource-advancedoptions1.png "Advanced DSN configuration options")
+ :::image type="content" source="./media/apache-hadoop-connect-excel-hive-odbc-driver/hiveodbc-datasource-advancedoptions1.png" alt-text="Advanced DSN configuration options" border="true":::
1. Select **Test** to test the data source. When the data source is configured correctly, the test result shows **SUCCESS!**
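Applications that can't use a saved DSN can pass the same settings in a DSN-less connection string. A hedged sketch of assembling one: the driver name and key names (`Host`, `Port`, `AuthMech`, `ThriftTransport`, `SSL`) are assumptions based on common Hive ODBC driver conventions, so verify them against your driver's documentation:

```python
def hive_odbc_connection_string(cluster_name, username, password):
    """Assemble a DSN-less connection string for a Hive ODBC driver.

    The driver name and key names below are assumptions; check your
    installed driver's documentation for the exact keywords it accepts.
    """
    parts = {
        "DRIVER": "{Microsoft Hive ODBC Driver}",
        "Host": f"{cluster_name}.azurehdinsight.net",
        "Port": "443",           # HDInsight exposes HiveServer2 over HTTPS
        "HiveServerType": "2",
        "AuthMech": "6",         # assumed: HDInsight service authentication
        "ThriftTransport": "2",  # assumed: HTTP transport
        "SSL": "1",
        "UID": username,
        "PWD": password,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

print(hive_odbc_connection_string("CLUSTERNAME", "admin", "password"))
```

The resulting string can be handed to any ODBC-capable client (for example, `pyodbc.connect(...)`) instead of selecting a configured DSN.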
The following steps describe the way to import data from a Hive table into an Ex
2. From the **Data** tab, navigate to **Get Data** > **From Other Sources** > **From ODBC** to launch the **From ODBC** window.
- ![Open Excel data connection wizard](./media/apache-hadoop-connect-excel-hive-odbc-driver/simbahiveodbc-excel-dataconnection1.png "Open Excel data connection wizard")
+ :::image type="content" source="./media/apache-hadoop-connect-excel-hive-odbc-driver/simbahiveodbc-excel-dataconnection1.png" alt-text="Open Excel data connection wizard" border="true":::
3. From the drop-down list, select the data source name that you created in the last section and then select **OK**.
5. From **Navigator**, navigate to **HIVE** > **default** > **hivesampletable**, and then select **Load**. It takes a few moments before data gets imported to Excel.
- ![HDInsight Excel Hive ODBC navigator](./media/apache-hadoop-connect-excel-hive-odbc-driver/hdinsight-hive-odbc-navigator.png "HDInsight Excel Hive ODBC navigator")
+ :::image type="content" source="./media/apache-hadoop-connect-excel-hive-odbc-driver/hdinsight-hive-odbc-navigator.png" alt-text="HDInsight Excel Hive ODBC navigator" border="true":::
## Next steps
hdinsight Apache Hadoop Connect Excel Power Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-connect-excel-power-query.md
The Power Query add-in for Excel makes it easy to import data from your HDInsigh
* Select **Data** > **Get Data** > **From Azure** > **From Azure HDInsight (HDFS)**.
- ![HDI.PowerQuery.SelectHdiSource.2016](./media/apache-hadoop-connect-excel-power-query/powerquery-selecthdisource-excel2016.png)
+ :::image type="content" source="./media/apache-hadoop-connect-excel-power-query/powerquery-selecthdisource-excel2016.png" alt-text="HDI.PowerQuery.SelectHdiSource.2016" border="true":::
* Excel 2013/2010
  * Select **Power Query** > **From Azure** > **From Microsoft Azure HDInsight**.
- ![HDI.PowerQuery.SelectHdiSource](./media/apache-hadoop-connect-excel-power-query/powerquery-selecthdisource.png)
+ :::image type="content" source="./media/apache-hadoop-connect-excel-power-query/powerquery-selecthdisource.png" alt-text="HDI.PowerQuery.SelectHdiSource" border="true":::
**Note:** If you don't see the **Power Query** menu, go to **File** > **Options** > **Add-ins**, and select **COM Add-ins** from the drop-down **Manage** box at the bottom of the page. Select the **Go...** button and verify that the box for the Power Query for Excel add-in has been checked.
1. Locate **HiveSampleData.txt** in the **Name** column (the folder path is **../hive/warehouse/hivesampletable/**), and then select **Binary** to the left of HiveSampleData.txt. HiveSampleData.txt comes with all clusters. Optionally, you can use your own file.
- ![HDI Excel power query import data](./media/apache-hadoop-connect-excel-power-query/powerquery-importdata.png)
+ :::image type="content" source="./media/apache-hadoop-connect-excel-power-query/powerquery-importdata.png" alt-text="HDI Excel power query import data" border="true":::
1. If you want, you can rename the column names. When you're ready, select **Close & Load**. The data has been loaded to your workbook:
- ![HDI Excel power query imported table](./media/apache-hadoop-connect-excel-power-query/powerquery-importedtable.png)
+ :::image type="content" source="./media/apache-hadoop-connect-excel-power-query/powerquery-importedtable.png" alt-text="HDI Excel power query imported table" border="true":::
## Next steps
hdinsight Apache Hadoop Connect Hive Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-connect-hive-jdbc-driver.md
Replace `CLUSTERNAME` with the name of your HDInsight cluster.
Or you can get the connection string through **Ambari UI > Hive > Configs > Advanced**.
-![Get JDBC connection string through Ambari](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-get-connection-string-through-ambari.png)
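For reference, the HiveServer2 JDBC URL for an HDInsight cluster typically takes the following shape. This is a sketch with a hypothetical helper function; confirm the exact string against your cluster's Ambari UI.

```python
# Hypothetical helper that assembles the HiveServer2 JDBC URL for an
# HDInsight cluster. Verify the exact string in Ambari for your cluster.
def hive_jdbc_url(cluster_name: str) -> str:
    return (
        f"jdbc:hive2://{cluster_name}.azurehdinsight.net:443/default;"
        "transportMode=http;ssl=true;httpPath=/hive2"
    )

print(hive_jdbc_url("CLUSTERNAME"))
```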
### Host name in connection string
SQuirreL SQL is a JDBC client that can be used to remotely run Hive queries with
3. Start the SQuirreL SQL application. From the left of the window, select **Drivers**.
- ![Drivers tab on the left of the window](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-squirreldrivers.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-squirreldrivers.png" alt-text="Drivers tab on the left of the window" border="true":::
4. From the icons at the top of the **Drivers** dialog, select the **+** icon to create a driver.
- ![SQuirreL SQL application drivers icon](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-driversicons.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-driversicons.png" alt-text="SQuirreL SQL application drivers icon" border="true":::
5. In the Add Driver dialog, add the following information:
|Extra Class Path|Use the **Add** button to add all of the jar files downloaded earlier.|
|Class Name|org.apache.hive.jdbc.HiveDriver|
- ![add driver dialog with parameters](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-add-driver.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-add-driver.png" alt-text="add driver dialog with parameters" border="true":::
Select **OK** to save these settings.
6. On the left of the SQuirreL SQL window, select **Aliases**. Then select the **+** icon to create a connection alias.
- ![`SQuirreL SQL add new alias dialog`](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-new-aliases.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-new-aliases.png" alt-text="SQuirreL SQL add new alias dialog" border="true":::
7. Use the following values for the **Add Alias** dialog:
|User Name|The cluster login account name for your HDInsight cluster. The default is **admin**.|
|Password|The password for the cluster login account.|
- ![add alias dialog with parameters](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-addalias-dialog.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-addalias-dialog.png" alt-text="add alias dialog with parameters" border="true":::
> [!IMPORTANT]
> Use the **Test** button to verify that the connection works. When the **Connect to: Hive on HDInsight** dialog appears, select **Connect** to perform the test. If the test succeeds, you see a **Connection successful** dialog. If an error occurs, see [Troubleshooting](#troubleshooting).
8. From the **Connect to** dropdown at the top of SQuirreL SQL, select **Hive on HDInsight**. When prompted, select **Connect**.
- ![connection dialog with parameters](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-connect-dialog.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-connect-dialog.png" alt-text="connection dialog with parameters" border="true":::
9. Once connected, enter the following query into the SQL query dialog, and then select the **Run** icon (a running person). The results area should show the results of the query.
```sql
select * from hivesampletable limit 10;
```
- ![sql query dialog, including results](./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-sqlquery-dialog.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-sqlquery-dialog.png" alt-text="sql query dialog, including results" border="true":::
## Connect from an example Java application
hdinsight Apache Hadoop Connect Hive Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-connect-hive-power-bi.md
Learn how to connect Microsoft Power BI Desktop to Azure HDInsight using ODBC an
In this article, you load the data from a `hivesampletable` Hive table to Power BI. The Hive table contains some mobile phone usage data. Then you plot the usage data on a world map:
-![HDInsight Power BI the map report](./media/apache-hadoop-connect-hive-power-bi/hdinsight-power-bi-visualization.png)
The information also applies to the new [Interactive Query](../interactive-query/apache-interactive-query-get-started.md) cluster type. For how to connect to HDInsight Interactive Query using direct query, see [Visualize Interactive Query Hive data with Microsoft Power BI using direct query in Azure HDInsight](../interactive-query/apache-hadoop-connect-hive-power-bi-directquery.md).
The **hivesampletable** Hive table comes with all HDInsight clusters.
1. From the top menu, navigate to **Home** > **Get Data** > **More...**.
- ![HDInsight Excel Power BI open data](./media/apache-hadoop-connect-hive-power-bi/hdinsight-power-bi-open-odbc.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-power-bi/hdinsight-power-bi-open-odbc.png" alt-text="HDInsight Excel Power BI open data" border="true":::
1. From the **Get Data** dialog, select **Other** from the left, select **ODBC** from the right, and then select **Connect** on the bottom.
Continue from the last procedure.
1. From the Visualizations pane, select **Map** (the globe icon).
- ![HDInsight Power BI customizes report](./media/apache-hadoop-connect-hive-power-bi/hdinsight-power-bi-customize.png)
+ :::image type="content" source="./media/apache-hadoop-connect-hive-power-bi/hdinsight-power-bi-customize.png" alt-text="HDInsight Power BI customizes report" border="true":::
1. From the **Fields** pane, select **country** and **devicemake**. You can see the data plotted on the map.
hdinsight Apache Hadoop Deep Dive Advanced Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-deep-dive-advanced-analytics.md
HDInsight provides the ability to obtain valuable insight from large amounts of
## Advanced analytics process
-![Advanced analytics process flow](./media/apache-hadoop-deep-dive-advanced-analytics/hdinsight-analytic-process.png)
After you've identified the business problem and have started collecting and processing your data, you need to create a model that represents the question you wish to predict. Your model will use one or more machine learning algorithms to make the type of prediction that best fits your business needs. The majority of your data should be used to train your model, with the rest used to test or evaluate it.
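The train/test split described above can be sketched in a few lines of Python. This is illustrative only; real pipelines typically use a library helper such as scikit-learn's `train_test_split`.

```python
import random

# Illustrative only: shuffle the records and hold out a fraction of the
# data for testing/evaluation, training on the remaining majority.
def train_test_split(records, test_fraction=0.2, seed=42):
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(range(100))
print(len(train), len(test))  # 80 20
```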
After you create, load, test, and evaluate your model, the next step is to deplo
Advanced analytics solutions provide a set of machine learning algorithms. Here is a summary of the categories of algorithms and associated common business use cases.
-![Machine Learning category summaries](./media/apache-hadoop-deep-dive-advanced-analytics/machine-learning-use-cases.png)
Along with selecting the best-fitting algorithm(s), you need to consider whether or not you need to provide data for training. Machine learning algorithms are categorized as follows:
As part of HDInsight, you can create an HDInsight cluster with [ML Services](../
Let's review an example of an advanced analytics machine learning pipeline using HDInsight.
-In this scenario you'll see how DNNs produced in a deep learning framework, MicrosoftΓÇÖs Cognitive Toolkit (CNTK), can be operationalized for scoring large image collections stored in an Azure Blob Storage account using PySpark on an HDInsight Spark cluster. This approach is applied to a common DNN use case, aerial image classification, and can be used to identify recent patterns in urban development. You'll use a pre-trained image classification model. The model is pre-trained on the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) and has been applied to 10,000 withheld images.
+In this scenario you'll see how DNNs produced in a deep learning framework, Microsoft's Cognitive Toolkit (CNTK), can be operationalized for scoring large image collections stored in an Azure Blob Storage account using PySpark on an HDInsight Spark cluster. This approach is applied to a common DNN use case, aerial image classification, and can be used to identify recent patterns in urban development. You'll use a pre-trained image classification model. The model is pre-trained on the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) and has been applied to 10,000 withheld images.
There are three key tasks in this advanced analytics scenario:
This example uses the CIFAR-10 image set compiled and distributed by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset contains 60,000 32×32 color images belonging to 10 mutually exclusive classes:
-![Machine Learning example images](./media/apache-hadoop-deep-dive-advanced-analytics/machine-learning-images.png)
-For more information on the dataset, see Alex KrizhevskyΓÇÖs [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf).
+For more information on the dataset, see Alex Krizhevsky's [Learning Multiple Layers of Features from Tiny Images](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf).
-The dataset was partitioned into a training set of 50,000 images and a test set of 10,000 images. The first set was used to train a twenty-layer-deep convolutional residual network (ResNet) model using Microsoft Cognitive Toolkit by following [this tutorial](https://github.com/Microsoft/CNTK/tree/master/Examples/Image/Classification/ResNet) from the Cognitive Toolkit GitHub repository. The remaining 10,000 images were used for testing the modelΓÇÖs accuracy. This is where distributed computing comes into play: the task of pre-processing and scoring the images is highly parallelizable. With the saved trained model in hand, we used:
+The dataset was partitioned into a training set of 50,000 images and a test set of 10,000 images. The first set was used to train a twenty-layer-deep convolutional residual network (ResNet) model using Microsoft Cognitive Toolkit by following [this tutorial](https://github.com/Microsoft/CNTK/tree/master/Examples/Image/Classification/ResNet) from the Cognitive Toolkit GitHub repository. The remaining 10,000 images were used for testing the model's accuracy. This is where distributed computing comes into play: the task of pre-processing and scoring the images is highly parallelizable. With the saved trained model in hand, we used:
-* PySpark to distribute the images and trained model to the clusterΓÇÖs worker nodes.
+* PySpark to distribute the images and trained model to the cluster's worker nodes.
* Python to pre-process the images on each node of the HDInsight Spark cluster.
* Cognitive Toolkit to load the model and score the pre-processed images on each node.
* Jupyter Notebooks to run the PySpark script, aggregate the results, and use [Matplotlib](https://matplotlib.org/) to visualize the model performance.

The entire preprocessing/scoring of the 10,000 images takes less than one minute on a cluster with 4 worker nodes. The model accurately predicts the labels of ~9,100 (91%) images. A confusion matrix illustrates the most common classification errors. For example, the matrix shows that mislabeling dogs as cats and vice versa occurs more frequently than for other label pairs.
-![Machine Learning results chart](./media/apache-hadoop-deep-dive-advanced-analytics/machine-learning-results.png)
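The accuracy and confusion-matrix figures described above can be computed with a small sketch. The labels below are hypothetical; the actual pipeline scores 10,000 CIFAR-10 test images across 10 classes.

```python
from collections import Counter

# Hypothetical labels for illustration only.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def confusion_counts(y_true, y_pred):
    """Count (true label, predicted label) pairs - a sparse confusion matrix."""
    return Counter(zip(y_true, y_pred))

y_true = ["cat", "dog", "cat", "dog", "ship"]
y_pred = ["cat", "cat", "cat", "dog", "ship"]
print(accuracy(y_true, y_pred))                          # 0.8
print(confusion_counts(y_true, y_pred)[("dog", "cat")])  # 1
```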
### Try it Out!
hdinsight Apache Hadoop Dotnet Csharp Mapreduce Streaming https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-dotnet-csharp-mapreduce-streaming.md
Next, you need to upload the *mapper* and *reducer* applications to HDInsight st
1. Expand the HDInsight cluster that you wish to deploy this application to. An entry with the text **(Default Storage Account)** is listed.
- ![Storage account, HDInsight cluster, Server Explorer, Visual Studio](./media/apache-hadoop-dotnet-csharp-mapreduce-streaming/hdinsight-storage-account.png)
+ :::image type="content" source="./media/apache-hadoop-dotnet-csharp-mapreduce-streaming/hdinsight-storage-account.png" alt-text="Storage account, HDInsight cluster, Server Explorer, Visual Studio" border="true":::
* If the **(Default Storage Account)** entry can be expanded, you're using an **Azure Storage Account** as default storage for the cluster. To view the files on the default storage for the cluster, expand the entry and then double-click **(Default Container)**.
* If you're using an **Azure Storage Account**, select the **Upload Blob** icon.
- ![HDInsight upload icon for mapper, Visual Studio](./media/apache-hadoop-dotnet-csharp-mapreduce-streaming/hdinsight-upload-icon.png)
+ :::image type="content" source="./media/apache-hadoop-dotnet-csharp-mapreduce-streaming/hdinsight-upload-icon.png" alt-text="HDInsight upload icon for mapper, Visual Studio" border="true":::
In the **Upload New File** dialog box, under **File name**, select **Browse**. In the **Upload Blob** dialog box, go to the *bin\debug* folder for the *mapper* project, and then choose the *mapper.exe* file. Finally, select **Open** and then **OK** to complete the upload.
hdinsight Apache Hadoop Emulator Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-emulator-get-started.md
To download an older HDP version sandbox, see the links under **Older Versions**
1. From the **File** menu, click **Import Appliance**, and then specify the Hortonworks Sandbox image.
1. Select the Hortonworks Sandbox, click **Start**, and then **Normal Start**. Once the virtual machine has finished the boot process, it displays login instructions.
- ![virtualbox manager normal start](./media/apache-hadoop-emulator-get-started/virtualbox-normal-start.png)
+ :::image type="content" source="./media/apache-hadoop-emulator-get-started/virtualbox-normal-start.png" alt-text="virtualbox manager normal start" border="true":::
1. Open a web browser and navigate to the URL displayed (usually `http://127.0.0.1:8888`).
hdinsight Apache Hadoop Etl At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-etl-at-scale.md
Extract, transform, and load (ETL) is the process by which data is acquired from
The use of HDInsight in the ETL process is summarized by this pipeline:
-![HDInsight ETL at scale overview](./media/apache-hadoop-etl-at-scale/hdinsight-etl-at-scale-overview.png)
The following sections explore each of the ETL phases and their associated components.
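At its core, the pipeline above is an extract-transform-load flow. A minimal, illustrative Python sketch (all names and data below are hypothetical):

```python
import csv
import io

# Hypothetical raw input; the sketch only shows the shape of the
# extract -> transform -> load flow described above.
raw = "device,country\niphone,US\npixel,IN\n"

def extract(text):
    """Extract: parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize every value to uppercase."""
    return [{key: value.upper() for key, value in row.items()} for row in rows]

def load(records, sink):
    """Load: append the transformed records to the target store."""
    sink.extend(records)

warehouse = []
load(transform(extract(raw)), warehouse)
print(warehouse[0]["country"])  # US
```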
hdinsight Apache Hadoop Hive Pig Udf Dotnet Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-hive-pig-udf-dotnet-csharp.md
Next, upload the Hive and Pig UDF applications to storage on a HDInsight cluster
1. Expand the HDInsight cluster that you wish to deploy this application to. An entry with the text **(Default Storage Account)** is listed.
- ![Default storage account, HDInsight cluster, Server Explorer](./media/apache-hadoop-hive-pig-udf-dotnet-csharp/hdinsight-storage-account.png)
+ :::image type="content" source="./media/apache-hadoop-hive-pig-udf-dotnet-csharp/hdinsight-storage-account.png" alt-text="Default storage account, HDInsight cluster, Server Explorer" border="true":::
* If this entry can be expanded, you're using an **Azure Storage Account** as default storage for the cluster. To view the files on the default storage for the cluster, expand the entry and then double-click **(Default Container)**.
* If you're using an **Azure Storage Account**, select the **Upload Blob** icon.
- ![HDInsight upload icon for new project](./media/apache-hadoop-hive-pig-udf-dotnet-csharp/hdinsight-upload-icon.png)
+ :::image type="content" source="./media/apache-hadoop-hive-pig-udf-dotnet-csharp/hdinsight-upload-icon.png" alt-text="HDInsight upload icon for new project" border="true":::
In the **Upload New File** dialog box, under **File name**, select **Browse**. In the **Upload Blob** dialog box, go to the *bin\debug* folder for the *HiveCSharp* project, and then choose the *HiveCSharp.exe* file. Finally, select **Open** and then **OK** to complete the upload.
hdinsight Apache Hadoop Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-introduction.md
To see available Hadoop technology stack components on HDInsight, see [Component
A basic word count MapReduce job example is illustrated in the following diagram:
- ![HDI.WordCountDiagram](./media/apache-hadoop-introduction/hdi-word-count-diagram.gif)
+ :::image type="content" source="./media/apache-hadoop-introduction/hdi-word-count-diagram.gif" alt-text="HDI.WordCountDiagram" border="true":::
The output of this job is a count of how many times each word occurred in the text.
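The word count job in the diagram can be sketched in plain Python. This is a local simulation of the map and reduce phases, not actual Hadoop code.

```python
# Local, plain-Python simulation of the word count MapReduce job.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Map phase: emit (word, 1) for every word in the line."""
    for word in line.split():
        yield (word, 1)

def reducer(pairs):
    """Reduce phase: sum the counts for each word (input must be sorted)."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

lines = ["the quick fox", "the fox"]
pairs = [pair for line in lines for pair in mapper(line)]
print(dict(reducer(pairs)))  # {'fox': 2, 'quick': 1, 'the': 2}
```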
hdinsight Apache Hadoop Linux Create Cluster Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-linux-create-cluster-get-started-portal.md
In this section, you create a Hadoop cluster in HDInsight using the Azure portal
1. From the top menu, select **+ Create a resource**.
- ![Create a resource HDInsight cluster](./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-create-resource.png "Create a resource HDInsight cluster")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-create-resource.png" alt-text="Create a resource HDInsight cluster" border="true":::
1. Select **Analytics** > **Azure HDInsight** to go to the **Create HDInsight cluster** page.
1. From the **Basics** tab, provide the following information:
- |Property |Description |
- |||
- |Subscription | From the drop-down list, select the Azure subscription that's used for the cluster. |
- |Resource group | From the drop-down list, select your existing resource group, or select **Create new**.|
- |Cluster name | Enter a globally unique name. The name can consist of up to 59 characters including letters, numbers, and hyphens. The first and last characters of the name can't be hyphens. |
- |Region | From the drop-down list, select a region where the cluster is created. Choose a location closer to you for better performance. |
- |Cluster type| Select **Select cluster type**. Then select **Hadoop** as the cluster type.|
- |Version|From the drop-down list, select a **version**. Use the default version if you don't know what to choose.|
- |Cluster login username and password | The default login name is **admin**. The password must be at least 10 characters in length and must contain at least one digit, one uppercase, and one lower case letter, one non-alphanumeric character (except characters ' " ` \). Make sure you **do not provide** common passwords such as "Pass@word1".|
- |Secure Shell (SSH) username | The default username is **sshuser**. You can provide another name for the SSH username. |
- |Use cluster login password for SSH| Select this check box to use the same password for SSH user as the one you provided for the cluster login user.|
+ |Property |Description |
+ |||
+ |Subscription | From the drop-down list, select the Azure subscription that's used for the cluster. |
+ |Resource group | From the drop-down list, select your existing resource group, or select **Create new**.|
+ |Cluster name | Enter a globally unique name. The name can consist of up to 59 characters including letters, numbers, and hyphens. The first and last characters of the name can't be hyphens. |
+ |Region | From the drop-down list, select a region where the cluster is created. Choose a location closer to you for better performance. |
+ |Cluster type| Select **Select cluster type**. Then select **Hadoop** as the cluster type.|
+ |Version|From the drop-down list, select a **version**. Use the default version if you don't know what to choose.|
+ |Cluster login username and password | The default login name is **admin**. The password must be at least 10 characters in length and must contain at least one digit, one uppercase letter, one lowercase letter, and one non-alphanumeric character (except the characters ' " ` \). Make sure you **do not provide** common passwords such as "Pass@word1".|
+ |Secure Shell (SSH) username | The default username is **sshuser**. You can provide another name for the SSH username. |
+ |Use cluster login password for SSH| Select this check box to use the same password for SSH user as the one you provided for the cluster login user.|
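As an illustrative sketch, the password rules in the Basics table above can be checked locally. The helper below is hypothetical, and it assumes the characters ' " ` \ may not appear anywhere in the password; the Azure portal performs the authoritative validation.

```python
# Sketch of the cluster-login password rules (assumption: the characters
# ' " ` \ may not appear anywhere in the password). The Azure portal
# performs the authoritative validation.
def valid_cluster_password(pw: str) -> bool:
    forbidden = set("'\"`\\")
    return (
        len(pw) >= 10
        and any(c.isdigit() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(not c.isalnum() for c in pw)
        and not forbidden & set(pw)
    )

print(valid_cluster_password("Sup3r!secret"))  # True
print(valid_cluster_password("password123"))   # False (no uppercase, no special character)
```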
- ![HDInsight Linux get started provide cluster basic values](./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-basics.png "Provide basic values for creating an HDInsight cluster")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-basics.png" alt-text="HDInsight Linux get started provide cluster basic values" border="true":::
- Select the **Next: Storage >>** to advance to the storage settings.
+ Select **Next: Storage >>** to advance to the storage settings.
1. From the **Storage** tab, provide the following values:
- |Property |Description |
- |||
- |Primary storage type|Use the default value **Azure Storage**.|
- |Selection method|Use the default value **Select from list**.|
- |Primary storage account|Use the drop-down list to select an existing storage account, or select **Create new**. If you create a new account, the name must be between 3 and 24 characters in length, and can include numbers and lowercase letters only|
- |Container|Use the autopopulated value.|
+ |Property |Description |
+ |||
+ |Primary storage type|Use the default value **Azure Storage**.|
+ |Selection method|Use the default value **Select from list**.|
+ |Primary storage account|Use the drop-down list to select an existing storage account, or select **Create new**. If you create a new account, the name must be between 3 and 24 characters in length, and can include numbers and lowercase letters only.|
+ |Container|Use the autopopulated value.|
- ![HDInsight Linux get started provide cluster storage values](./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-storage.png "Provide storage values for creating an HDInsight cluster")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-storage.png" alt-text="HDInsight Linux get started provide cluster storage values" border="true":::
- Each cluster has an [Azure Storage account](../hdinsight-hadoop-use-blob-storage.md), an [Azure Data Lake Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred as the default storage account. HDInsight cluster and its default storage account must be colocated in the same Azure region. Deleting clusters doesn't delete the storage account.
+ Each cluster has an [Azure Storage account](../hdinsight-hadoop-use-blob-storage.md), an [Azure Data Lake Gen1](../hdinsight-hadoop-use-data-lake-storage-gen1.md), or an [`Azure Data Lake Storage Gen2`](../hdinsight-hadoop-use-data-lake-storage-gen2.md) dependency. It's referred to as the default storage account. The HDInsight cluster and its default storage account must be located in the same Azure region. Deleting the cluster doesn't delete the storage account.
- Select the **Review + create** tab.
+ Select the **Review + create** tab.
1. From the **Review + create** tab, verify the values you selected in the earlier steps.
- ![HDInsight Linux get started cluster summary](./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-review-create-hadoop.png "HDInsight Linux get started cluster summary")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/azure-portal-cluster-review-create-hadoop.png" alt-text="HDInsight Linux get started cluster summary" border="true":::
1. Select **Create**. It takes about 20 minutes to create a cluster.
- Once the cluster is created, you see the cluster overview page in the Azure portal.
+ Once the cluster is created, you see the cluster overview page in the Azure portal.
- ![HDInsight Linux get started cluster settings](./media/apache-hadoop-linux-create-cluster-get-started-portal/cluster-settings-overview.png "HDInsight cluster properties")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/cluster-settings-overview.png" alt-text="HDInsight Linux get started cluster settings" border="true":::
## Run Apache Hive queries
1. To open Ambari, from the previous screenshot, select **Cluster Dashboard**. You can also browse to `https://ClusterName.azurehdinsight.net` where `ClusterName` is the cluster you created in the previous section.
- ![HDInsight Linux get started cluster dashboard](./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-get-started-open-cluster-dashboard.png "HDInsight Linux get started cluster dashboard")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-get-started-open-cluster-dashboard.png" alt-text="HDInsight Linux get started cluster dashboard" border="true":::
2. Enter the Hadoop username and password that you specified while creating the cluster. The default username is **admin**.
3. Open **Hive View** as shown in the following screenshot:
- ![Selecting Hive View from Ambari](./media/apache-hadoop-linux-create-cluster-get-started-portal/hdi-select-hive-view.png "HDInsight Hive Viewer menu")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdi-select-hive-view.png" alt-text="Selecting Hive View from Ambari" border="true":::
4. In the **QUERY** tab, paste the following HiveQL statements into the worksheet:
- ```sql
- SHOW TABLES;
- ```
+ ```sql
+ SHOW TABLES;
+ ```
- ![HDInsight Hive View Query Editor](./media/apache-hadoop-linux-create-cluster-get-started-portal/hdi-apache-hive-view1.png "HDInsight Hive View Query Editor")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdi-apache-hive-view1.png" alt-text="HDInsight Hive View Query Editor" border="true":::
-5. Select **Execute**. A **RESULTS** tab appears beneath the **QUERY** tab and displays information about the job.
+5. Select **Execute**. A **RESULTS** tab appears beneath the **QUERY** tab and displays information about the job.
- Once the query has finished, the **QUERY** tab displays the results of the operation. You shall see one table called **hivesampletable**. This sample Hive table comes with all the HDInsight clusters.
+ Once the query has finished, the **QUERY** tab displays the results of the operation. You should see one table called **hivesampletable**. This sample Hive table comes with all HDInsight clusters.
- ![HDInsight Apache Hive view results](./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-hive-views.png "HDInsight Apache Hive view results")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-hive-views.png" alt-text="HDInsight Apache Hive view results" border="true":::
6. Repeat step 4 and step 5 to run the following query:
- ```sql
- SELECT * FROM hivesampletable;
- ```
+ ```sql
+ SELECT * FROM hivesampletable;
+ ```
7. You can also save the results of the query. Select the menu button on the right, and specify whether you want to download the results as a CSV file or store them in the storage account associated with the cluster.
- ![Save result of Apache Hive query](./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-hive-view-save-results.png "Save result of Apache Hive query")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-linux-hive-view-save-results.png" alt-text="Save result of Apache Hive query" border="true":::
After you've completed a Hive job, you can [export the results to Azure SQL Database or SQL Server database](apache-hadoop-use-sqoop-mac-linux.md), or [visualize the results using Excel](apache-hadoop-connect-excel-power-query.md). For more information about using Hive in HDInsight, see [Use Apache Hive and HiveQL with Apache Hadoop in HDInsight to analyze a sample Apache log4j file](hdinsight-use-hive.md).
After you complete the quickstart, you may want to delete the cluster. With HDIn
1. Go back to the browser tab where you have the Azure portal. You should be on the cluster overview page. If you only want to delete the cluster but retain the default storage account, select **Delete**.
- ![Azure HDInsight delete cluster](./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-delete-cluster.png "Delete Azure HDInsight cluster")
+ :::image type="content" source="./media/apache-hadoop-linux-create-cluster-get-started-portal/hdinsight-delete-cluster.png" alt-text="Azure HDInsight delete cluster" border="true":::
2. If you want to delete the cluster as well as the default storage account, select the resource group name (highlighted in the previous screenshot) to open the resource group page.
hdinsight Apache Hadoop Linux Tutorial Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started.md
Currently HDInsight comes with [seven different cluster types](../hdinsight-over
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-hdinsight-linux-ssh-password%2Fazuredeploy.json)
+[:::image type="icon" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-hdinsight-linux-ssh-password%2Fazuredeploy.json)
## Prerequisites
Two Azure resources are defined in the template:
1. Select the **Deploy to Azure** button below to sign in to Azure and open the ARM template.
- [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-hdinsight-linux-ssh-password%2Fazuredeploy.json)
+ [:::image type="icon" source="../../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-hdinsight-linux-ssh-password%2Fazuredeploy.json)
1. Enter or select the following values:
Two Azure resources are defined in the template:
> [!NOTE]
> The values you provide must be unique and should follow the naming guidelines. The template does not perform validation checks. If the values you provide are already in use, or do not follow the guidelines, you get an error after you have submitted the template.
- ![HDInsight Linux gets started Resource Manager template on portal](./media/apache-hadoop-linux-tutorial-get-started/hdinsight-linux-get-started-arm-template-on-portal.png "Deploy Hadoop cluster in HDInsight using the Azure portal and a resource group manager template")
+ :::image type="content" source="./media/apache-hadoop-linux-tutorial-get-started/hdinsight-linux-get-started-arm-template-on-portal.png" alt-text="HDInsight Linux get started Resource Manager template on portal" border="true":::
1. Review the **TERMS AND CONDITIONS**. Then select **I agree to the terms and conditions stated above**, then **Purchase**. You'll receive a notification that your deployment is in progress. It takes about 20 minutes to create a cluster.
After you complete the quickstart, you may want to delete the cluster. With HDIn
From the Azure portal, navigate to your cluster, and select **Delete**.
-![HDInsight delete cluster from portal](./media/apache-hadoop-linux-tutorial-get-started/hdinsight-delete-cluster.png "HDInsight delete cluster from portal")
You can also select the resource group name to open the resource group page, and then select **Delete resource group**. By deleting the resource group, you delete both the HDInsight cluster, and the default storage account.
hdinsight Apache Hadoop Use Hive Ambari View https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-use-hive-ambari-view.md
A Hadoop cluster on HDInsight. See [Get Started with HDInsight on Linux](./apach
1. From the list of views, select __Hive View__.
- ![Apache Ambari select Apache Hive view](./media/apache-hadoop-use-hive-ambari-view/select-apache-hive-view.png)
+ :::image type="content" source="./media/apache-hadoop-use-hive-ambari-view/select-apache-hive-view.png" alt-text="Apache Ambari select Apache Hive view" border="true":::
The Hive view page is similar to the following image:
- ![Image of the query worksheet for the Hive view](./media/apache-hadoop-use-hive-ambari-view/ambari-worksheet-view.png)
+ :::image type="content" source="./media/apache-hadoop-use-hive-ambari-view/ambari-worksheet-view.png" alt-text="Image of the query worksheet for the Hive view" border="true":::
1. From the __Query__ tab, paste the following HiveQL statements into the worksheet:
To display the Tez UI for the query, select the **Tez UI** tab below the workshe
The __Jobs__ tab displays a history of Hive queries.
-![Apache Hive view jobs tab history](./media/apache-hadoop-use-hive-ambari-view/apache-hive-job-history.png)
## Database tables

You can use the __Tables__ tab to work with tables within a Hive database.
-![Image of the Apache Hive tables tab](./media/apache-hadoop-use-hive-ambari-view/hdinsight-tables-tab.png)
## Saved queries

From the **Query** tab, you can optionally save queries. After you save a query, you can reuse it from the __Saved Queries__ tab.
-![Apache Hive views saved queries tab](./media/apache-hadoop-use-hive-ambari-view/ambari-saved-queries.png)
> [!TIP]
> Saved queries are stored in the default cluster storage. You can find the saved queries under the path `/user/<username>/hive/scripts`. These are stored as plain-text `.hql` files.
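Because saved queries are plain `.hql` files in the default cluster storage, you can list them from inside a Hive session with the built-in `dfs` command. A minimal sketch, assuming a user named `admin` (substitute your own username):

```sql
-- List saved Hive View queries for the user 'admin' (username is an assumption)
dfs -ls /user/admin/hive/scripts;
```

The same path can also be browsed with `hdfs dfs -ls` from an SSH session on the cluster.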
You can extend Hive through user-defined functions (UDF). Use a UDF to implement
Declare and save a set of UDFs by using the **UDF** tab at the top of the Hive View. These UDFs can be used with the **Query Editor**.
-![Apache Hive view UDFs tab display](./media/apache-hadoop-use-hive-ambari-view/user-defined-functions.png)
An **Insert udfs** button appears at the bottom of the **Query Editor**. This entry displays a drop-down list of the UDFs defined in the Hive View. Selecting a UDF adds HiveQL statements to your query to enable the UDF.
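The statements that the **Insert udfs** button adds follow the standard HiveQL pattern for registering a UDF from a JAR. A hedged sketch of that pattern, using a hypothetical JAR path, class name, and function name (none of these are from this article):

```sql
-- Make the JAR containing the UDF available to the session (path is a hypothetical example)
ADD JAR wasbs:///example/jars/ExampleUdfs.jar;

-- Register the UDF under a name usable in queries (class name is a hypothetical example)
CREATE TEMPORARY FUNCTION toLowerUdf AS 'com.example.hive.udf.ToLower';

-- Use the registered UDF in a query against the sample table
SELECT toLowerUdf(deviceplatform) FROM hivesampletable LIMIT 5;
```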
hdinsight Apache Hadoop Use Hive Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-use-hive-dotnet-sdk.md
The HDInsight .NET SDK provides .NET client libraries, which makes it easier to
The output of the application should be similar to:
-![HDInsight Hadoop Hive job output](./media/apache-hadoop-use-hive-dotnet-sdk/hdinsight-hadoop-use-hive-net-sdk-output.png)
## Next steps
hdinsight Apache Hadoop Use Hive Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-use-hive-visual-studio.md
Ad hoc queries can be executed in either **Batch** or **Interactive** mode.
5. Select **Execute**. The execution mode defaults to **Interactive**.
- ![Execute interactive Hive query, Visual Studio](./media/apache-hadoop-use-hive-visual-studio/vs-execute-hive-query.png)
+ :::image type="content" source="./media/apache-hadoop-use-hive-visual-studio/vs-execute-hive-query.png" alt-text="Execute interactive Hive query, Visual Studio" border="true":::
6. To run the same query in **Batch** mode, toggle the drop-down list from **Interactive** to **Batch**. The execution button changes from **Execute** to **Submit**.
- ![Submit batch Hive query, Visual Studio](./media/apache-hadoop-use-hive-visual-studio/visual-studio-batch-query.png)
+ :::image type="content" source="./media/apache-hadoop-use-hive-visual-studio/visual-studio-batch-qu