Updates from: 07/01/2021 03:05:34
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-create-user-flows.md
A user flow lets you determine how users interact with your application when the
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription.
- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant).
-- [Create a Facebook application](identity-provider-facebook.md#create-a-facebook-application). Skip the prerequisites and the reset of the steps in the [Set up sign-up and sign-in with a Facebook account](identity-provider-facebook.md) article. Although a Facebook application is not required for using custom policies, it's used in this walkthrough to demonstrate enabling social login in a custom policy.
+- [Create a Facebook application](identity-provider-facebook.md#create-a-facebook-application). Skip the prerequisites and the rest of the steps in the [Set up sign-up and sign-in with a Facebook account](identity-provider-facebook.md) article. Although a Facebook application is not required for using custom policies, it's used in this walkthrough to demonstrate enabling social login in a custom policy.
::: zone-end
active-directory Active Directory Acs Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/azuread-dev/active-directory-acs-migration.md
Each Microsoft cloud service that accepts tokens that are issued by Access Contr
| - | -- |
| Azure Service Bus | [Migrate to shared access signatures](../../service-bus-messaging/service-bus-sas.md) |
| Azure Service Bus Relay | [Migrate to shared access signatures](../../azure-relay/relay-migrate-acs-sas.md) |
-| Azure Managed Cache | [Migrate to Azure Cache for Redis](../../azure-cache-for-redis/cache-faq.md) |
+| Azure Managed Cache | [Migrate to Azure Cache for Redis](../../azure-cache-for-redis/cache-faq.yml) |
| Azure DataMarket | [Migrate to the Cognitive Services APIs](https://azure.microsoft.com/services/cognitive-services/) |
| BizTalk Services | [Migrate to the Logic Apps feature of Azure App Service](https://azure.microsoft.com/services/cognitive-services/) |
| Azure Media Services | [Migrate to Azure AD authentication](https://azure.microsoft.com/blog/azure-media-service-aad-auth-and-acs-deprecation/) |
active-directory Reference Cloud Sync Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/reference-cloud-sync-faq.md
- Title: Azure AD Connect cloud sync FAQ
-description: This document describes frequently asked questions for cloud sync.
- Previously updated: 06/25/2020
-# Azure Active Directory Connect cloud sync FAQ
-
-Read about frequently asked questions for Azure Active Directory (Azure AD) Connect cloud sync.
-
-## General installation
-
-**Q: How often does cloud sync run?**
-
-Cloud provisioning is scheduled to run every 2 minutes. Every 2 minutes, any user, group, and password hash changes are provisioned to Azure AD.
-
-**Q: Seeing password hash sync failures on the first run. Why?**
-
-This is expected. The failures occur because the user object is not yet present in Azure AD. Once the user is provisioned to Azure AD, password hashes should provision in the subsequent run. Wait for a couple of runs and confirm that password hash sync no longer reports errors.
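The retry behavior described above can be modeled with a short sketch (hypothetical logic, not the actual provisioning service): a password hash can only attach to a user object that already existed when the cycle started, so first-run failures clear on the next run.

```python
def sync_cycle(on_prem_users, cloud):
    """Simulate one 2-minute provisioning cycle (illustrative model only)."""
    existed_at_start = set(cloud)
    hash_failures = []
    for user in on_prem_users:
        cloud.setdefault(user, {"hash": None})   # provision the user object
        if user in existed_at_start:
            cloud[user]["hash"] = "synced"       # hash sync succeeds
        else:
            hash_failures.append(user)           # object too new; retried next run
    return hash_failures

cloud = {}
first_run = sync_cycle(["alice", "bob"], cloud)   # both hash syncs fail
second_run = sync_cycle(["alice", "bob"], cloud)  # both succeed
```

Running two cycles shows the failures resolving on their own, which is why the guidance is simply to wait for a couple of runs.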
-
-**Q: What happens if the Active Directory instance has attributes that are not supported by cloud sync (for instance, directory extensions)?**
-
-Cloud provisioning will run and provision the supported attributes. The unsupported attributes will not be provisioned to Azure AD. Review the directory extensions in Active Directory and ensure that you don't need those attributes to flow to Azure AD. If one or more attributes are required, consider using Azure AD Connect sync or moving the required information to one of the supported attributes (for instance, extension attributes 1-15).
-
-**Q: What's the difference between Azure AD Connect sync and cloud sync?**
-
-With Azure AD Connect sync, provisioning runs on the on-premises sync server. Configuration is stored on the on-premises sync server. With Azure AD Connect cloud sync, the provisioning configuration is stored in the cloud and runs in the cloud as part of the Azure AD provisioning service.
-
-**Q: Can I use cloud sync to sync from multiple Active Directory forests?**
-
-Yes. Cloud provisioning can be used to sync from multiple Active Directory forests. In a multi-forest environment, all the references (for example, manager) need to be within the domain.
-
-**Q: How is the agent updated?**
-
-The agents are auto upgraded by Microsoft. For the IT team, this reduces the burden of having to test and validate new agent versions.
-
-**Q: Can I disable auto upgrade?**
-
-There is no supported way to disable auto upgrade.
-
-**Q: Can I change the source anchor for cloud sync?**
-
-By default, cloud sync uses ms-ds-consistency-GUID with a fallback to ObjectGUID as source anchor. There is no supported way to change the source anchor.
-
-**Q: I see new service principals with the AD domain name(s) when using cloud sync. Is it expected?**
-
-Yes, cloud sync creates a service principal for the provisioning configuration with the domain name as the service principal name. Do not make any changes to the service principal configuration.
-
-**Q: What happens when a synced user is required to change password on next logon?**
-
-If password hash sync is enabled in cloud sync and the synced user is required to change password on next logon in on-premises AD, cloud sync does not provision the "to-be-changed" password hash to Azure AD. Once the user changes the password, the user password hash is provisioned from AD to Azure AD.
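A minimal sketch of that rule (the field names below are invented for illustration): hashes for accounts flagged "user must change password at next logon" are held back until the password actually changes.

```python
def hashes_to_provision(users):
    """Select password hashes eligible for provisioning (illustrative model)."""
    return {u["name"]: u["hash"] for u in users if not u["must_change_password"]}

users = [
    {"name": "alice", "hash": "h1", "must_change_password": False},
    {"name": "bob",   "hash": "h2", "must_change_password": True},  # held back
]
print(hashes_to_provision(users))   # {'alice': 'h1'}
```

Once bob changes his password on-premises, the flag clears and his new hash is picked up on a later cycle.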
-
-**Q: Does cloud sync support writeback of ms-ds-consistencyGUID for any object?**
-
-No, cloud sync does not support writeback of ms-ds-consistencyGUID for any object (including user objects).
-
-**Q: I am provisioning users using cloud sync. I deleted the configuration. Why do I still see the old synced objects in Azure AD?**
-
-When you delete the configuration, cloud sync does not automatically remove the synced objects in Azure AD. To ensure you do not have the old objects, change the scope of the configuration to an empty group or organizational unit. Once the provisioning run completes and cleans up the objects, disable and delete the configuration.
-
-**Q: What does it mean that Exchange hybrid is not supported?**
-
-The Exchange Hybrid Deployment feature allows for the co-existence of Exchange mailboxes both on-premises and in Microsoft 365. Azure AD Connect is synchronizing a specific set of attributes from Azure AD back into your on-premises directory. The cloud provisioning agent currently does not synchronize these attributes back into your on-premises directory and thus it is not supported as a replacement for Azure AD Connect.
-
-**Q: Can I install the cloud provisioning agent on Windows Server Core?**
-
-No, installing the agent on server core is not supported.
-
-**Q: Can I use a staging server with the cloud provisioning agent?**
-
-No, staging servers are not supported.
-
-**Q: Can I synchronize Guest user accounts?**
-
-No, synchronizing guest user accounts is not supported.
-
-**Q: If I move a user from an OU that is scoped for cloud sync to an OU that is scoped for Azure AD Connect, what happens?**
-
-The user will be deleted and re-created. Moving a user from an OU that is scoped for cloud sync will be viewed as a delete operation. If the user is moved to an OU that is managed by Azure AD Connect, it will be re-provisioned to Azure AD and a new user created.
-
-**Q: If I rename or move the OU that is in scope for the cloud sync filter, what happens to the users that were created in Azure AD?**
-
-Nothing. The users will not be deleted if the OU is renamed or moved.
-
-**Q: Does Azure AD Connect cloud sync support large groups?**
-
-Yes. Today we support up to 50K group members synchronized using OU scope filtering. At the same time, when you use group scope filtering, we recommend that you keep your group size to fewer than 1,500 members. The reason is that even though you can sync a large group as part of the group scoping filter, when you add members to that group in batches of more than 1,500, the delta synchronization will fail.
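The 1,500-member batch limit above suggests staging large membership changes in chunks. A small helper (hypothetical, for illustration only) could split additions like this:

```python
def member_batches(members, batch_size=1500):
    """Yield membership additions in chunks at or under the delta-sync limit."""
    for i in range(0, len(members), batch_size):
        yield members[i:i + batch_size]

# 4,000 new members become three batches: 1500, 1500, and 1000
batches = list(member_batches(list(range(4000))))
```

Each batch would then be added and allowed to synchronize before the next one, staying under the threshold at which delta synchronization fails.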
-
-## Next steps
-
-- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory Msal Compare Msal Js And Adal Js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
Title: Differences between MSAL.js and ADAL.js | Azure
+ Title: Differences between MSAL.js and ADAL.js| Azure
description: Learn about the differences between Microsoft Authentication Library for JavaScript (MSAL.js) and Azure AD Authentication Library for JavaScript (ADAL.js) and how to choose which to use.
Last updated 04/10/2019
#Customer intent: As an application developer, I want to learn about the differences between the ADAL.js and MSAL.js libraries so I can migrate my applications to MSAL.js.
active-directory Msal Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-overview.md
MSAL gives you many ways to get tokens, with a consistent API for a number of pl
* Helps you set up your application from configuration files.
* Helps you troubleshoot your app by exposing actionable exceptions, logging, and telemetry.
+> [!VIDEO https://www.youtube.com/embed/zufQ0QRUHUk]
+ ## Application types and scenarios

Using MSAL, a token can be acquired from a number of application types: web applications, web APIs, single-page apps (JavaScript), mobile and native applications, and daemons and server-side applications.
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/choose-ad-authn.md
Refer to [implementing password hash synchronization](../../active-directory/hyb
* **User experience**. To improve users' sign-in experience, deploy seamless SSO with Pass-through Authentication. Seamless SSO eliminates unnecessary prompts after users sign in.
-* **Advanced scenarios**. Pass-through Authentication enforces the on-premises account policy at the time of sign-in. For example, access is denied when an on-premises user's account state is disabled, locked out, or their [password expires](../../active-directory/hybrid/how-to-connect-pta-faq.md#what-happens-if-my-users-password-has-expired-and-they-try-to-sign-in-by-using-pass-through-authentication) or the logon attempt falls outside the hours when the user is allowed to sign in.
+* **Advanced scenarios**. Pass-through Authentication enforces the on-premises account policy at the time of sign-in. For example, access is denied when an on-premises user's account state is disabled, locked out, or their [password expires](../../active-directory/hybrid/how-to-connect-pta-faq.yml#what-happens-if-my-user-s-password-has-expired-and-they-try-to-sign-in-by-using-pass-through-authentication-) or the logon attempt falls outside the hours when the user is allowed to sign in.
Organizations that require multi-factor authentication with pass-through authentication must use Azure AD Multi-Factor Authentication (MFA) or [Conditional Access custom controls](../../active-directory/conditional-access/controls.md#custom-controls-preview). Those organizations can't use a third-party or on-premises multifactor authentication method that relies on federation. Advanced features require that password hash synchronization is deployed whether or not you choose pass-through authentication. An example is the leaked credentials report of Identity Protection.
Refer to [implementing password hash synchronization](../../active-directory/hyb
* **Considerations**. You can use password hash synchronization as a backup authentication method for pass-through authentication, when the agents can't validate a user's credentials due to a significant on-premises failure. Fail over to password hash synchronization doesn't happen automatically and you must use Azure AD Connect to switch the sign-on method manually.
- For other considerations on Pass-through Authentication, including Alternate ID support, see [frequently asked questions](../../active-directory/hybrid/how-to-connect-pta-faq.md).
+ For other considerations on Pass-through Authentication, including Alternate ID support, see [frequently asked questions](../../active-directory/hybrid/how-to-connect-pta-faq.yml).
Refer to [implementing pass-through authentication](../../active-directory/hybrid/how-to-connect-pta.md) for deployment steps.
The following diagrams outline the high-level architecture components required f
|Is there a TLS/SSL certificate requirement?|No|No|Yes|
|Is there a health monitoring solution?|Not required|Agent status provided by [Azure Active Directory admin center](../../active-directory/hybrid/tshoot-connect-pass-through-authentication.md)|[Azure AD Connect Health](../../active-directory/hybrid/how-to-connect-health-adfs.md)|
|Do users get single sign-on to cloud resources from domain-joined devices within the company network?|Yes with [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)|Yes with [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)|Yes|
-|What sign-in types are supported?|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)<br><br>[Alternate login ID](../../active-directory/hybrid/how-to-connect-install-custom.md)|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)<br><br>[Alternate login ID](../../active-directory/hybrid/how-to-connect-pta-faq.md)|UserPrincipalName + password<br><br>sAMAccountName + password<br><br>Windows-Integrated Authentication<br><br>[Certificate and smart card authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br><br>[Alternate login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)|
+|What sign-in types are supported?|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)<br><br>[Alternate login ID](../../active-directory/hybrid/how-to-connect-install-custom.md)|UserPrincipalName + password<br><br>Windows-Integrated Authentication by using [Seamless SSO](../../active-directory/hybrid/how-to-connect-sso.md)<br><br>[Alternate login ID](../../active-directory/hybrid/how-to-connect-pta-faq.yml)|UserPrincipalName + password<br><br>sAMAccountName + password<br><br>Windows-Integrated Authentication<br><br>[Certificate and smart card authentication](/windows-server/identity/ad-fs/operations/configure-user-certificate-authentication)<br><br>[Alternate login ID](/windows-server/identity/ad-fs/operations/configuring-alternate-login-id)|
|Is Windows Hello for Business supported?|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br>*Requires Windows Server 2016 Domain functional level*|[Key trust model](/windows/security/identity-protection/hello-for-business/hello-identity-verification)<br><br>[Certificate trust model](/windows/security/identity-protection/hello-for-business/hello-key-trust-adfs)|
|What are the multifactor authentication options?|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Custom Controls with Conditional Access*](../../active-directory/conditional-access/controls.md)|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Custom Controls with Conditional Access*](../../active-directory/conditional-access/controls.md)|[Azure AD MFA](/azure/multi-factor-authentication/)<br><br>[Azure MFA server](../../active-directory/authentication/howto-mfaserver-deploy.md)<br><br>[Third-party MFA](/windows-server/identity/ad-fs/operations/configure-additional-authentication-methods-for-ad-fs)<br><br>[Custom Controls with Conditional Access*](../../active-directory/conditional-access/controls.md)|
|What user account states are supported?|Disabled accounts<br>(up to 30-minute delay)|Disabled accounts<br><br>Account locked out<br><br>Account expired<br><br>Password expired<br><br>Sign-in hours|Disabled accounts<br><br>Account locked out<br><br>Account expired<br><br>Password expired<br><br>Sign-in hours|
active-directory How To Connect Health Adds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-adds.md
By default, we have preselected four performance counters; however, you can incl
* [Azure AD Connect Health Operations](how-to-connect-health-operations.md)
* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md)
* [Using Azure AD Connect Health for sync](how-to-connect-health-sync.md)
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.md)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
The "basic" audit level is enabled by default. For more information, see [AD FS
3. On the right, select **Filter Current Logs**.
4. For **Event sources**, select **AD FS Auditing**.
- For more information about audit logs, see [Operations questions](reference-connect-health-faq.md#operations-questions).
+ For more information about audit logs, see [Operations questions](/azure/active-directory/hybrid/reference-connect-health-faq#operations-questions).
![Screenshot showing the Filter Current Log window. In the "Event sources" field, "AD FS auditing" is selected.](./media/how-to-connect-health-agent-install/adfsaudit.png)
Check out the following related articles:
* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md)
* [Using Azure AD Connect Health for Sync](how-to-connect-health-sync.md)
* [Using Azure AD Connect Health with Azure AD DS](how-to-connect-health-adds.md)
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.md)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
active-directory How To Connect Health Alert Catalog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-alert-catalog.md
Azure AD Connect Health alerts get resolved on a success condition. Azure AD Con
## Next steps
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.md)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
active-directory How To Connect Health Data Freshness https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-data-freshness.md
The steps required to diagnose the issue are given below. The first is a set of b
If any of the above steps identified an issue, fix it and wait for the alert to resolve. The alert background process runs every 2 hours, so it will take up to 2 hours to resolve the alert.

* [Azure AD Connect Health data retention policy](reference-connect-health-user-privacy.md#data-retention-policy)
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.md)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
active-directory How To Connect Health Data Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-data-retrieval.md
To retrieve accounts that were flagged with AD FS Bad Password attempts, use the
* [Azure AD Connect Health](./whatis-azure-ad-connect.md)
* [Azure AD Connect Health Agent Installation](how-to-connect-health-agent-install.md)
* [Azure AD Connect Health Operations](how-to-connect-health-operations.md)
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.md)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
active-directory How To Connect Health Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-operations.md
You can remove a user or a group added to Azure AD Connect Health and Azure RBAC
* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md)
* [Using Azure AD Connect Health for sync](how-to-connect-health-sync.md)
* [Using Azure AD Connect Health with AD DS](how-to-connect-health-adds.md)
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.md)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
active-directory How To Connect Health Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-sync.md
Read more about [Diagnose and remediate duplicated attribute sync errors](how-to
* [Azure AD Connect Health Operations](how-to-connect-health-operations.md)
* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md)
* [Using Azure AD Connect Health with AD DS](how-to-connect-health-adds.md)
-* [Azure AD Connect Health FAQ](reference-connect-health-faq.md)
+* [Azure AD Connect Health FAQ](reference-connect-health-faq.yml)
* [Azure AD Connect Health Version History](reference-connect-health-version-history.md)
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-custom.md
Users use the *userPrincipalName* attribute when they sign in to Azure AD and Mi
If the userPrincipalName attribute is nonroutable and can't be verified, then you can select another attribute. You can, for example, select email as the attribute that holds the sign-in ID. When you use an attribute other than userPrincipalName, it's known as an *alternate ID*.
-The alternate ID attribute value must follow the RFC 822 standard. You can use an alternate ID with password hash sync, pass-through authentication, and federation. In Active Directory, the attribute can't be defined as multivalued, even if it has only a single value. For more information about the alternate ID, see [Pass-through authentication: Frequently asked questions](./how-to-connect-pta-faq.md#does-pass-through-authentication-support-alternate-id-as-the-username-instead-of-userprincipalname).
+The alternate ID attribute value must follow the RFC 822 standard. You can use an alternate ID with password hash sync, pass-through authentication, and federation. In Active Directory, the attribute can't be defined as multivalued, even if it has only a single value. For more information about the alternate ID, see [Pass-through authentication: Frequently asked questions](./how-to-connect-pta-faq.yml#does-pass-through-authentication-support--alternate-id--as-the-username--instead-of--userprincipalname--).
>[!NOTE]
> When you enable pass-through authentication, you must have at least one verified domain to continue through the custom installation process.
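The RFC 822 requirement on the alternate ID value can be sanity-checked before selecting the attribute. The pattern below is a rough addr-spec shape test (an assumption for illustration, not a full RFC 822 parser):

```python
import re

# Loose addr-spec shape check: local-part@domain with a dotted domain.
# Illustrative only; real RFC 822 grammar is far more permissive and complex.
ADDR_SPEC = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_addr_spec(value: str) -> bool:
    """Return True if a candidate alternate-ID value is addr-spec shaped."""
    return bool(ADDR_SPEC.match(value))

looks_like_addr_spec("jdoe@contoso.com")   # True
looks_like_addr_spec("jdoe")               # False: not addr-spec shaped
```

A check like this can catch attribute values (for example, plain sAMAccountName values) that would not satisfy the alternate-ID format requirement.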
active-directory How To Connect Install Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-roadmap.md
To get started with Azure AD Connect Health, use the following steps:
The Azure AD Connect Health portal shows views of alerts, performance monitoring, and usage analytics. The https://aka.ms/aadconnecthealth URL takes you to the main blade of Azure AD Connect Health. You can think of a blade as a window. On the main blade, you see **Quick Start**, services within Azure AD Connect Health, and additional configuration options. See the following screenshot and the brief explanations after it. After you deploy the agents, the health service automatically identifies the services that Azure AD Connect Health is monitoring.

> [!NOTE]
-> For licensing information, see the [Azure AD Connect Health FAQ](reference-connect-health-faq.md) or the [Azure AD Pricing page](https://aka.ms/aadpricing).
+> For licensing information, see the [Azure AD Connect Health FAQ](reference-connect-health-faq.yml) or the [Azure AD Pricing page](https://aka.ms/aadpricing).
![Azure AD Connect Health Portal](./media/whatis-hybrid-identity-health/portalsidebar.png)
active-directory How To Connect Pta Current Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-current-limitations.md
# Azure Active Directory Pass-through Authentication: Current limitations
->[!IMPORTANT]
->Azure Active Directory (Azure AD) Pass-through Authentication is a free feature, and you don't need any paid editions of Azure AD to use it. Pass-through Authentication is only available in the world-wide instance of Azure AD, and not on the [Microsoft Azure Germany cloud](https://www.microsoft.de/cloud-deutschland) or the [Microsoft Azure Government cloud](https://azure.microsoft.com/features/gov/).
- ## Supported scenarios

The following scenarios are supported:
The following scenarios are _not_ supported:
- [Migrate from AD FS to Pass-through Authentication](https://aka.ms/ADFSTOPTADPDownload) - A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Learn how to configure the Smart Lockout capability on your tenant to protect user accounts.
- [Technical deep dive](how-to-connect-pta-how-it-works.md): Understand how the Pass-through Authentication feature works.
-- [Frequently asked questions](how-to-connect-pta-faq.md): Find answers to frequently asked questions about the Pass-through Authentication feature.
+- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions about the Pass-through Authentication feature.
- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature.
- [Security deep dive](how-to-connect-pta-security-deep-dive.md): Get deep technical information on the Pass-through Authentication feature.
- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
active-directory How To Connect Pta Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-faq.md
- Title: 'Azure AD Connect: Pass-through Authentication - Frequently asked questions | Microsoft Docs'
-description: Answers to frequently asked questions about Azure Active Directory Pass-through Authentication
-
-keywords: Azure AD Connect Pass-through Authentication, install Active Directory, required components for Azure AD, SSO, Single Sign-on
- Previously updated: 06/09/2020
-# Azure Active Directory Pass-through Authentication: Frequently asked questions
-
-This article addresses frequently asked questions about Azure Active Directory (Azure AD) Pass-through Authentication. Keep checking back for updated content.
-
-## Which sign-in method should I choose for Azure AD: Pass-through Authentication, password hash synchronization, or Active Directory Federation Services (AD FS)?
-
-Review [this guide](./choose-ad-authn.md) for a comparison of the various Azure AD sign-in methods and how to choose the right sign-in method for your organization.
-
-## Is Pass-through Authentication a free feature?
-
-Pass-through Authentication is a free feature. You don't need any paid editions of Azure AD to use it.
-
-## Does [Conditional Access](../conditional-access/overview.md) work with Pass-through Authentication?
-
-Yes. All Conditional Access capabilities, including Azure AD Multi-Factor Authentication, work with Pass-through Authentication.
-
-## Does Pass-through Authentication support "Alternate ID" as the username, instead of "userPrincipalName"?
-Yes, sign-in using a non-UPN value, such as an alternate email, is supported for both pass-through authentication (PTA) and password hash sync (PHS). For more information, see [Alternate Login ID](../authentication/howto-authentication-use-email-signin.md).
-
-## Does password hash synchronization act as a fallback to Pass-through Authentication?
-
-No. Pass-through Authentication _does not_ automatically failover to password hash synchronization. To avoid user sign-in failures, you should configure Pass-through Authentication for [high availability](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability).
-
-## What happens when I switch from password hash synchronization to Pass-through Authentication?
-
-When you use Azure AD Connect to switch the sign-in method from password hash synchronization to Pass-through Authentication, Pass-through Authentication becomes the primary sign-in method for your users in managed domains. Note that all password hashes that were previously synchronized by password hash synchronization remain stored in Azure AD.
-
-## Can I install an [Azure AD Application Proxy](../app-proxy/application-proxy.md) connector on the same server as a Pass-through Authentication Agent?
-
-Yes. The rebranded versions of the Pass-through Authentication Agent, version 1.5.193.0 or later, support this configuration.
-
-## What versions of Azure AD Connect and Pass-through Authentication Agent do you need?
-
-For this feature to work, you need version 1.1.750.0 or later for Azure AD Connect and 1.5.193.0 or later for the Pass-through Authentication Agent. Install all the software on servers with Windows Server 2012 R2 or later.
-
-## What happens if my user's password has expired and they try to sign in by using Pass-through Authentication?
-
-If you have configured [password writeback](../authentication/concept-sspr-writeback.md) for a specific user, and if the user signs in by using Pass-through Authentication, they can change or reset their passwords. The passwords are written back to on-premises Active Directory as expected.
-
-If you have not configured password writeback for a specific user or if the user doesn't have a valid Azure AD license assigned, the user can't update their password in the cloud. They can't update their password, even if their password has expired. The user instead sees this message: "Your organization doesn't allow you to update your password on this site. Update it according to the method recommended by your organization, or ask your admin if you need help." The user or the administrator must reset their password in on-premises Active Directory.
-
-## How does Pass-through Authentication protect you against brute-force password attacks?
-
-[Read information about Smart Lockout](../authentication/howto-password-smart-lockout.md).
-
-## What do Pass-through Authentication Agents communicate over ports 80 and 443?
-
-- The Authentication Agents make HTTPS requests over port 443 for all feature operations.
-- The Authentication Agents make HTTP requests over port 80 to download the TLS/SSL certificate revocation lists (CRLs).
-
- >[!NOTE]
- >Recent updates reduced the number of ports that the feature requires. If you have older versions of Azure AD Connect or the Authentication Agent, keep these ports open as well: 5671, 8080, 9090, 9091, 9350, 9352, and 10100-10120.
-
-## Can the Pass-through Authentication Agents communicate over an outbound web proxy server?
-
-Yes. If Web Proxy Auto-Discovery (WPAD) is enabled in your on-premises environment, Authentication Agents automatically attempt to locate and use a web proxy server on the network.
-
-If you don't have WPAD in your environment, you can add proxy information (as shown below) to allow a Pass-through Authentication Agent to communicate with Azure AD:
-- Configure proxy information in Internet Explorer before you install the Pass-through Authentication Agent on the server. This will allow you to complete the installation of the Authentication Agent, but it will still show up as **Inactive** on the Admin portal.
-- On the server, navigate to "C:\Program Files\Microsoft Azure AD Connect Authentication Agent".
-- Edit the "AzureADConnectAuthenticationAgentService" configuration file and add the following lines (replace "http\://contosoproxy.com:8080" with your actual proxy address):
-
-```
- <system.net>
- <defaultProxy enabled="true" useDefaultCredentials="true">
- <proxy
- usesystemdefault="true"
- proxyaddress="http://contosoproxy.com:8080"
- bypassonlocal="true"
- />
- </defaultProxy>
- </system.net>
-```
-
-## Can I install two or more Pass-through Authentication Agents on the same server?
-
-No, you can only install one Pass-through Authentication Agent on a single server. If you want to configure Pass-through Authentication for high availability, [follow the instructions here](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability).
-
-## Do I have to manually renew certificates used by Pass-through Authentication Agents?
-
-The communication between each Pass-through Authentication Agent and Azure AD is secured using certificate-based authentication. These [certificates are automatically renewed every few months by Azure AD](how-to-connect-pta-security-deep-dive.md#operational-security-of-the-authentication-agents). There is no need to manually renew these certificates. You can clean up older expired certificates as required.
-
-## How do I remove a Pass-through Authentication Agent?
-
-As long as a Pass-through Authentication Agent is running, it remains active and continually handles user sign-in requests. If you want to uninstall an Authentication Agent, go to **Control Panel -> Programs -> Programs and Features** and uninstall both the **Microsoft Azure AD Connect Authentication Agent** and the **Microsoft Azure AD Connect Agent Updater** programs.
-
-If you check the Pass-through Authentication blade on the [Azure Active Directory admin center](https://aad.portal.azure.com) after completing the preceding step, you'll see the Authentication Agent showing as **Inactive**. This is _expected_. The Authentication Agent is automatically dropped from the list after 10 days.
-
-## I already use AD FS to sign in to Azure AD. How do I switch it to Pass-through Authentication?
-
-If you are migrating from AD FS (or other federation technologies) to Pass-through Authentication, we highly recommend that you follow our detailed deployment guide published [here](https://github.com/Identity-Deployment-Guides/Identity-Deployment-Guides/blob/master/Authentication/Migrating%20from%20Federated%20Authentication%20to%20Pass-through%20Authentication.docx?raw=true).
-
-## Can I use Pass-through Authentication in a multi-forest Active Directory environment?
-
-Yes. Multi-forest environments are supported if there are forest trusts (two-way) between your Active Directory forests and if name suffix routing is correctly configured.
-
-## Does Pass-through Authentication provide load balancing across multiple Authentication Agents?
-
-No, installing multiple Pass-through Authentication Agents ensures only [high availability](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability). It does not provide deterministic load balancing between the Authentication Agents. Any Authentication Agent (at random) can process a particular user sign-in request.
-
-## How many Pass-through Authentication Agents do I need to install?
-
-Installing multiple Pass-through Authentication Agents ensures [high availability](how-to-connect-pta-quick-start.md#step-4-ensure-high-availability). But, it does not provide deterministic load balancing between the Authentication Agents.
-
-Consider the peak and average load of sign-in requests that you expect to see on your tenant. As a benchmark, a single Authentication Agent can handle 300 to 400 authentications per second on a standard 4-core CPU, 16-GB RAM server.
-
-To estimate network traffic, use the following sizing guidance:
-- Each request has a payload size of (0.5K + 1K * num_of_agents) bytes; i.e., data from Azure AD to the Authentication Agent. Here, "num_of_agents" indicates the number of Authentication Agents registered on your tenant.
-- Each response has a payload size of 1K bytes; i.e., data from the Authentication Agent to Azure AD.
-
-For most customers, two or three Authentication Agents in total are sufficient for high availability and capacity. You should install Authentication Agents close to your domain controllers to improve sign-in latency.
-
->[!NOTE]
->There is a system limit of 40 Authentication Agents per tenant.
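As a rough illustration of the sizing guidance above, here is a small Python sketch. The per-agent throughput, payload formula, and 40-agent limit are the figures quoted above; the `0.5K`/`1K` values are interpreted as 512/1024 bytes, which is an assumption. Treat the output as a ballpark estimate, not a capacity guarantee.

```python
# Rough capacity/traffic estimate for Pass-through Authentication Agents,
# based on the sizing guidance above (illustrative only).

AUTHS_PER_AGENT_PER_SEC = 300   # conservative end of the 300-400/sec benchmark
MAX_AGENTS_PER_TENANT = 40      # system limit per tenant

def agents_needed(peak_auths_per_sec, min_agents=2):
    """Agents needed for capacity, with a floor of two for high availability."""
    capacity = -(-peak_auths_per_sec // AUTHS_PER_AGENT_PER_SEC)  # ceiling division
    needed = max(min_agents, capacity)
    if needed > MAX_AGENTS_PER_TENANT:
        raise ValueError("exceeds the 40-agent-per-tenant limit")
    return needed

def request_payload_bytes(num_agents):
    """Azure AD -> agent payload: 0.5K + 1K * number of registered agents."""
    return 512 + 1024 * num_agents

RESPONSE_PAYLOAD_BYTES = 1024   # agent -> Azure AD

if __name__ == "__main__":
    agents = agents_needed(peak_auths_per_sec=500)
    print(agents)                          # 2 agents cover 500/sec
    print(request_payload_bytes(agents))   # 512 + 2 * 1024 = 2560 bytes
```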
-
-## Why do I need a cloud-only Global Administrator account to enable Pass-through Authentication?
-
-It is recommended that you enable or disable Pass-through Authentication using a cloud-only Global Administrator account. Learn about [adding a cloud-only Global Administrator account](../fundamentals/add-users-azure-active-directory.md). Doing it this way ensures that you don't get locked out of your tenant.
-
-## How can I disable Pass-through Authentication?
-
-Rerun the Azure AD Connect wizard and change the user sign-in method from Pass-through Authentication to another method. This change disables Pass-through Authentication on the tenant and uninstalls the Authentication Agent from the server. You must manually uninstall the Authentication Agents from the other servers.
-
-## What happens when I uninstall a Pass-through Authentication Agent?
-
-If you uninstall a Pass-through Authentication Agent from a server, it causes the server to stop accepting sign-in requests. To avoid breaking the user sign-in capability on your tenant, ensure that you have another Authentication Agent running before you uninstall a Pass-through Authentication Agent.
-
-## I have an older tenant that was originally set up using AD FS. We recently migrated to PTA, but now we aren't seeing our UPN changes synchronize to Azure AD. Why aren't our UPN changes being synchronized?
-
-Your on-premises UPN changes may not synchronize if all of the following are true:
-
-- Your Azure AD tenant was created prior to June 15, 2015.
-- You were initially federated with your Azure AD tenant, using AD FS for authentication.
-- You switched to managed users, using PTA as the authentication method.
-
-This is because the default behavior of tenants created prior to June 15, 2015 was to block UPN changes. If you need to unblock UPN changes, run the following PowerShell cmdlet:
-
-`Set-MsolDirSyncFeature -Feature SynchronizeUpnForManagedUsers -Enable $True`
-
-Tenants created after June 15, 2015 synchronize UPN changes by default.
---
-## Next steps
-- [Current limitations](how-to-connect-pta-current-limitations.md): Learn which scenarios are supported and which ones are not.
-- [Quick start](how-to-connect-pta-quick-start.md): Get up and running on Azure AD Pass-through Authentication.
-- [Migrate from AD FS to Pass-through Authentication](https://github.com/Identity-Deployment-Guides/Identity-Deployment-Guides/blob/master/Authentication/Migrating%20from%20Federated%20Authentication%20to%20Pass-through%20Authentication.docx?raw=true): A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
-- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Learn how to configure the Smart Lockout capability on your tenant to protect user accounts.
-- [Technical deep dive](how-to-connect-pta-how-it-works.md): Understand how the Pass-through Authentication feature works.
-- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature.
-- [Security deep dive](how-to-connect-pta-security-deep-dive.md): Get deep technical information on the Pass-through Authentication feature.
-- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
-- [UserVoice](https://feedback.azure.com/forums/169401-azure-active-directory/category/160611-directory-synchronization-aad-connect): Use the Azure Active Directory Forum to file new feature requests.
active-directory How To Connect Pta How It Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-how-it-works.md
The following diagram illustrates all the components and the steps involved:
- [Quick Start](how-to-connect-pta-quick-start.md): Get up and running on Azure AD Pass-through Authentication.
- [Migrate from AD FS to Pass-through Authentication](https://aka.ms/adfstoPTADP): A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Configure the Smart Lockout capability on your tenant to protect user accounts.
-- [Frequently Asked Questions](how-to-connect-pta-faq.md): Find answers to frequently asked questions.
+- [Frequently Asked Questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions.
- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature.
- [Security Deep Dive](how-to-connect-pta-security-deep-dive.md): Get deep technical information on the Pass-through Authentication feature.
- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Smart Lockout assists in locking out bad actors who are trying to guess your use
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Learn how to configure the Smart Lockout capability on your tenant to protect user accounts.
- [Current limitations](how-to-connect-pta-current-limitations.md): Learn which scenarios are supported with the Pass-through Authentication and which ones are not.
- [Technical deep dive](how-to-connect-pta-how-it-works.md): Understand how the Pass-through Authentication feature works.
-- [Frequently asked questions](how-to-connect-pta-faq.md): Find answers to frequently asked questions.
+- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions.
- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature.
- [Security deep dive](how-to-connect-pta-security-deep-dive.md): Get technical information on the Pass-through Authentication feature.
- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
To auto-update an Authentication Agent:
- [Migrate from AD FS to Pass-through Authentication](https://aka.ms/adfstoptadpdownload): A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Configure the Smart Lockout capability on your tenant to protect user accounts.
- [How it works](how-to-connect-pta-how-it-works.md): Learn the basics of how Azure AD Pass-through Authentication works.
-- [Frequently asked questions](how-to-connect-pta-faq.md): Find answers to frequently asked questions.
+- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions.
- [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature.
- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
active-directory How To Connect Pta https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta.md
You can combine Pass-through Authentication with the [Seamless Single Sign-On](h
- [Smart Lockout](../authentication/howto-password-smart-lockout.md) - Configure Smart Lockout capability on your tenant to protect user accounts.
- [Current limitations](how-to-connect-pta-current-limitations.md) - Learn which scenarios are supported and which ones are not.
- [Technical Deep Dive](how-to-connect-pta-how-it-works.md) - Understand how this feature works.
-- [Frequently Asked Questions](how-to-connect-pta-faq.md) - Answers to frequently asked questions.
+- [Frequently Asked Questions](how-to-connect-pta-faq.yml) - Answers to frequently asked questions.
- [Troubleshoot](tshoot-connect-pass-through-authentication.md) - Learn how to resolve common issues with the feature.
- [Security Deep Dive](how-to-connect-pta-security-deep-dive.md) - Additional deep technical information on the feature.
- [Azure AD Seamless SSO](how-to-connect-sso.md) - Learn more about this complementary feature.
active-directory How To Connect Sso Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-faq.md
- Title: 'Azure AD Connect: Seamless Single Sign-On - Frequently asked questions | Microsoft Docs'
-description: Answers to frequently asked questions about Azure Active Directory Seamless Single Sign-On.
-
-keywords: what is Azure AD Connect, install Active Directory, required components for Azure AD, SSO, Single Sign-on
-
-Previously updated : 10/07/2019
-
-# Azure Active Directory Seamless Single Sign-On: Frequently asked questions
-
-In this article, we address frequently asked questions about Azure Active Directory Seamless Single Sign-On (Seamless SSO). Keep checking back for new content.
-
-**Q: What sign-in methods does Seamless SSO work with?**
-
-Seamless SSO can be combined with either the [Password Hash Synchronization](how-to-connect-password-hash-synchronization.md) or [Pass-through Authentication](how-to-connect-pta.md) sign-in methods. However, this feature cannot be used with Active Directory Federation Services (AD FS).
-
-**Q: Is Seamless SSO a free feature?**
-
-Seamless SSO is a free feature and you don't need any paid editions of Azure AD to use it.
-
-**Q: Is Seamless SSO available in the [Microsoft Azure Germany cloud](https://www.microsoft.de/cloud-deutschland) and the [Microsoft Azure Government cloud](https://azure.microsoft.com/features/gov/)?**
-
-Seamless SSO is available for the Azure Government cloud. For details, view [Hybrid Identity Considerations for Azure Government](./reference-connect-government-cloud.md).
-
-**Q: What applications take advantage of `domain_hint` or `login_hint` parameter capability of Seamless SSO?**
-
-The following non-exhaustive list shows applications that can send these parameters to Azure AD, and therefore provide users a silent sign-on experience using Seamless SSO (i.e., no need for your users to input their usernames or passwords):
-
-| Application name | Application URL to be used |
-| -- | -- |
-| Access panel | https:\//myapps.microsoft.com/contoso.com |
-| Outlook on Web | https:\//outlook.office365.com/contoso.com |
-| Office 365 portals | https:\//portal.office.com?domain_hint=contoso.com, https:\//www.office.com?domain_hint=contoso.com |
-
-In addition, users get a silent sign-on experience if an application sends sign-in requests to Azure AD's endpoints set up as tenants - that is, https:\//login.microsoftonline.com/contoso.com/<..> or https:\//login.microsoftonline.com/<tenant_ID>/<..> - instead of Azure AD's common endpoint - that is, https:\//login.microsoftonline.com/common/<...>. Listed below is a non-exhaustive list of applications that make these types of sign-in requests.
-
-| Application name | Application URL to be used |
-| -- | -- |
-| SharePoint Online | https:\//contoso.sharepoint.com |
-| Azure portal | https:\//portal.azure.com/contoso.com |
-
-In the above tables, replace "contoso.com" with your domain name to get to the right application URLs for your tenant.
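To make the `domain_hint` pattern in the tables above concrete, here is a small Python helper. It is a hypothetical illustration, not part of any Microsoft SDK; the function name is invented, and it simply appends the parameter to an existing sign-in URL.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def with_domain_hint(url, domain):
    """Append a domain_hint query parameter to a sign-in URL, as the
    applications above do to enable a silent Seamless SSO experience.
    (Illustrative helper only.)"""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["domain_hint"] = domain          # e.g. your verified domain name
    return urlunsplit(parts._replace(query=urlencode(query)))

print(with_domain_hint("https://www.office.com", "contoso.com"))
# https://www.office.com?domain_hint=contoso.com
```

Replace `contoso.com` with your own domain name, as with the URLs in the tables.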
-
-If you want other applications using our silent sign-on experience, let us know in the feedback section.
-
-**Q: Does Seamless SSO support `Alternate ID` as the username, instead of `userPrincipalName`?**
-
-Yes. Seamless SSO supports `Alternate ID` as the username when configured in Azure AD Connect as shown [here](how-to-connect-install-custom.md). Not all Microsoft 365 applications support `Alternate ID`. Refer to the specific application's documentation for the support statement.
-
-**Q: What is the difference between the single sign-on experience provided by [Azure AD Join](../devices/overview.md) and Seamless SSO?**
-
-[Azure AD Join](../devices/overview.md) provides SSO to users if their devices are registered with Azure AD. These devices don't necessarily have to be domain-joined. SSO is provided using *primary refresh tokens* or *PRTs*, and not Kerberos. The user experience is most optimal on Windows 10 devices. SSO happens automatically on the Microsoft Edge browser. It also works on Chrome with the use of a browser extension.
-
-You can use both Azure AD Join and Seamless SSO on your tenant. These two features are complementary. If both features are turned on, then SSO from Azure AD Join takes precedence over Seamless SSO.
-
-**Q: I want to register non-Windows 10 devices with Azure AD, without using AD FS. Can I use Seamless SSO instead?**
-
-Yes, this scenario needs version 2.1 or later of the [workplace-join client](https://www.microsoft.com/download/details.aspx?id=53554).
-
-**Q: How can I roll over the Kerberos decryption key of the `AZUREADSSO` computer account?**
-
-It is important to frequently roll over the Kerberos decryption key of the `AZUREADSSO` computer account (which represents Azure AD) created in your on-premises AD forest.
-
->[!IMPORTANT]
->We highly recommend that you roll over the Kerberos decryption key at least every 30 days.
-
-Follow these steps on the on-premises server where you are running Azure AD Connect:
-
- > [!NOTE]
- >You will need both domain administrator and global administrator credentials for the steps below.
- >If you are not a domain admin and you were assigned permissions by the domain admin, you should call `Update-AzureADSSOForest -OnPremCredentials $creds -PreserveCustomPermissionsOnDesktopSsoAccount`
-
- **Step 1. Get list of AD forests where Seamless SSO has been enabled**
-
- 1. First, download, and install [Azure AD PowerShell](/powershell/azure/active-directory/overview).
- 2. Navigate to the `$env:programfiles\Microsoft Azure Active Directory Connect` folder.
- 3. Import the Seamless SSO PowerShell module using this command: `Import-Module .\AzureADSSO.psd1`.
- 4. Run PowerShell as an Administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command should give you a popup to enter your tenant's Global Administrator credentials.
- 5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command provides you the list of AD forests (look at the "Domains" list) on which this feature has been enabled.
-
- **Step 2. Update the Kerberos decryption key on each AD forest that it was set up on**
-
- 1. Call `$creds = Get-Credential`. When prompted, enter the Domain Administrator credentials for the intended AD forest.
-
- > [!NOTE]
- >The domain administrator credentials username must be entered in the SAM account name format (contoso\johndoe or contoso.com\johndoe). We use the domain portion of the username to locate the Domain Controller of the Domain Administrator using DNS.
-
- >[!NOTE]
- >The domain administrator account used must not be a member of the Protected Users group. If so, the operation will fail.
-
- 2. Call `Update-AzureADSSOForest -OnPremCredentials $creds`. This command updates the Kerberos decryption key for the `AZUREADSSO` computer account in this specific AD forest and updates it in Azure AD.
-
- 3. Repeat the preceding steps for each AD forest that you've set up the feature on.
-
- >[!NOTE]
- >If you are updating a forest, other than the Azure AD Connect one, make sure connectivity to the global catalog server (TCP 3268 and TCP 3269) is available.
-
- >[!IMPORTANT]
- >Ensure that you _don't_ run the `Update-AzureADSSOForest` command more than once. Otherwise, the feature stops working until the time your users' Kerberos tickets expire and are reissued by your on-premises Active Directory.
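The 30-day rollover recommendation above can be tracked with a trivial helper. This is an illustrative sketch only (not part of any Microsoft tooling); you would supply the date of your last `Update-AzureADSSOForest` run.

```python
from datetime import date, timedelta

ROLLOVER_INTERVAL = timedelta(days=30)  # recommended maximum key age

def next_rollover_due(last_rollover: date) -> date:
    """Date by which the AZUREADSSO Kerberos decryption key should be
    rolled over again, per the 30-day recommendation above."""
    return last_rollover + ROLLOVER_INTERVAL

print(next_rollover_due(date(2021, 6, 1)))  # 2021-07-01
```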
-
-**Q: How can I disable Seamless SSO?**
-
- **Step 1. Disable the feature on your tenant**
-
- **Option A: Disable using Azure AD Connect**
-
- 1. Run Azure AD Connect, choose **Change user sign-in page** and click **Next**.
- 2. Uncheck the **Enable single sign on** option. Continue through the wizard.
-
- After completing the wizard, Seamless SSO will be disabled on your tenant. However, you will see a message on screen that reads as follows:
-
- "Single sign-on is now disabled, but there are additional manual steps to perform in order to complete clean-up. [Learn more](tshoot-connect-sso.md#step-3-disable-seamless-sso-for-each-active-directory-forest-where-youve-set-up-the-feature)"
-
- To complete the clean-up process, follow steps 2 and 3 on the on-premises server where you are running Azure AD Connect.
-
- **Option B: Disable using PowerShell**
-
- Run the following steps on the on-premises server where you are running Azure AD Connect:
-
- 1. First, download, and install [Azure AD PowerShell](/powershell/azure/active-directory/overview).
- 2. Navigate to the `$env:ProgramFiles\Microsoft Azure Active Directory Connect` folder.
- 3. Import the Seamless SSO PowerShell module using this command: `Import-Module .\AzureADSSO.psd1`.
- 4. Run PowerShell as an Administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command should give you a popup to enter your tenant's Global Administrator credentials.
- 5. Call `Enable-AzureADSSO -Enable $false`.
-
- At this point, Seamless SSO is disabled, but the domains remain configured in case you want to re-enable it later. If you want to remove the domains from the Seamless SSO configuration completely, call the following cmdlet after completing step 5 above: `Disable-AzureADSSOForest -DomainFqdn <fqdn>`.
-
- >[!IMPORTANT]
- >Disabling Seamless SSO using PowerShell will not change the state in Azure AD Connect. Seamless SSO will show as enabled in the **Change user sign-in** page.
-
- **Step 2. Get list of AD forests where Seamless SSO has been enabled**
-
- Follow tasks 1 through 4 below if you have disabled Seamless SSO using Azure AD Connect. If you have disabled Seamless SSO using PowerShell instead, jump ahead to task 5 below.
-
- 1. First, download, and install [Azure AD PowerShell](/powershell/azure/active-directory/overview).
- 2. Navigate to the `$env:ProgramFiles\Microsoft Azure Active Directory Connect` folder.
- 3. Import the Seamless SSO PowerShell module using this command: `Import-Module .\AzureADSSO.psd1`.
- 4. Run PowerShell as an Administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command should give you a popup to enter your tenant's Global Administrator credentials.
- 5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command provides you the list of AD forests (look at the "Domains" list) on which this feature has been enabled.
-
- **Step 3. Manually delete the `AZUREADSSO` computer account from each AD forest that you see listed.**
-
-## Next steps
-- [**Quickstart**](how-to-connect-sso-quick-start.md) - Get up and running with Azure AD Seamless SSO.
-- [**Technical Deep Dive**](how-to-connect-sso-how-it-works.md) - Understand how this feature works.
-- [**Troubleshoot**](tshoot-connect-sso.md) - Learn how to resolve common issues with the feature.
-- [**UserVoice**](https://feedback.azure.com/forums/169401-azure-active-directory/category/160611-directory-synchronization-aad-connect) - For filing new feature requests.
active-directory How To Connect Sso How It Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md
Seamless SSO is enabled using Azure AD Connect as shown [here](how-to-connect-ss
- The computer account's Kerberos decryption key is shared securely with Azure AD. If there are multiple AD forests, each computer account will have its own unique Kerberos decryption key.

>[!IMPORTANT]
-> The `AZUREADSSOACC` computer account needs to be strongly protected for security reasons. Only Domain Admins should be able to manage the computer account. Ensure that Kerberos delegation on the computer account is disabled, and that no other account in Active Directory has delegation permissions on the `AZUREADSSOACC` computer account.. Store the computer account in an Organization Unit (OU) where they are safe from accidental deletions and where only Domain Admins have access. The Kerberos decryption key on the computer account should also be treated as sensitive. We highly recommend that you [roll over the Kerberos decryption key](how-to-connect-sso-faq.md) of the `AZUREADSSOACC` computer account at least every 30 days.
+> The `AZUREADSSOACC` computer account needs to be strongly protected for security reasons. Only Domain Admins should be able to manage the computer account. Ensure that Kerberos delegation on the computer account is disabled, and that no other account in Active Directory has delegation permissions on the `AZUREADSSOACC` computer account. Store the computer account in an Organizational Unit (OU) where it is safe from accidental deletions and where only Domain Admins have access. The Kerberos decryption key on the computer account should also be treated as sensitive. We highly recommend that you [roll over the Kerberos decryption key](how-to-connect-sso-faq.yml) of the `AZUREADSSOACC` computer account at least every 30 days.
>[!IMPORTANT]
-> Seamless SSO supports the AES256_HMAC_SHA1, AES128_HMAC_SHA1 and RC4_HMAC_MD5 encryption types for Kerberos. It is recommended that the encryption type for the AzureADSSOAcc$ account is set to AES256_HMAC_SHA1, or one of the AES types vs. RC4 for added security. The encryption type is stored on the msDS-SupportedEncryptionTypes attribute of the account in your Active Directory. If the AzureADSSOAcc$ account encryption type is set to RC4_HMAC_MD5, and you want to change it to one of the AES encryption types, please make sure that you first roll over the Kerberos decryption key of the AzureADSSOAcc$ account as explained in the [FAQ document](how-to-connect-sso-faq.md) under the relevant question, otherwise Seamless SSO will not happen.
+> Seamless SSO supports the `AES256_HMAC_SHA1`, `AES128_HMAC_SHA1` and `RC4_HMAC_MD5` encryption types for Kerberos. It is recommended that the encryption type for the `AzureADSSOAcc$` account is set to `AES256_HMAC_SHA1`, or one of the AES types rather than RC4, for added security. The encryption type is stored on the `msDS-SupportedEncryptionTypes` attribute of the account in your Active Directory. If the `AzureADSSOAcc$` account encryption type is set to `RC4_HMAC_MD5`, and you want to change it to one of the AES encryption types, make sure that you first roll over the Kerberos decryption key of the `AzureADSSOAcc$` account as explained in the [FAQ document](how-to-connect-sso-faq.yml) under the relevant question; otherwise, Seamless SSO will not happen.
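The `msDS-SupportedEncryptionTypes` attribute mentioned above is a bitmask. As a sketch, this Python snippet decodes it; the flag values used here are the standard Active Directory/Kerberos bit assignments, listed as an assumption you should verify against your environment.

```python
# Decode the msDS-SupportedEncryptionTypes bitmask. Flag values follow the
# commonly documented Active Directory/Kerberos bit assignments (assumed
# here for illustration; verify in your environment).
ENC_TYPES = {
    0x01: "DES_CBC_CRC",
    0x02: "DES_CBC_MD5",
    0x04: "RC4_HMAC_MD5",
    0x08: "AES128_CTS_HMAC_SHA1_96",
    0x10: "AES256_CTS_HMAC_SHA1_96",
}

def decode_enc_types(mask):
    """Return the Kerberos encryption types enabled in the mask, low bit first."""
    return [name for bit, name in sorted(ENC_TYPES.items()) if mask & bit]

print(decode_enc_types(0x18))  # AES-only account
print(decode_enc_types(0x04))  # RC4-only account (candidate for hardening)
```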
Once the set-up is complete, Seamless SSO works the same way as any other sign-in that uses Integrated Windows Authentication (IWA).
The sign-in flow on a web browser is as follows:
3. The user types in their user name into the Azure AD sign-in page.

 >[!NOTE]
- >For [certain applications](./how-to-connect-sso-faq.md), steps 2 & 3 are skipped.
+ >For [certain applications](./how-to-connect-sso-faq.yml), steps 2 & 3 are skipped.
4. Using JavaScript in the background, Azure AD challenges the browser, via a 401 Unauthorized response, to provide a Kerberos ticket. 5. The browser, in turn, requests a ticket from Active Directory for the `AZUREADSSOACC` computer account (which represents Azure AD).
The following diagram illustrates all the components and the steps involved.
## Next steps

- [**Quick Start**](how-to-connect-sso-quick-start.md) - Get up and running with Azure AD Seamless SSO.
-- [**Frequently Asked Questions**](how-to-connect-sso-faq.md) - Answers to frequently asked questions.
+- [**Frequently Asked Questions**](how-to-connect-sso-faq.yml) - Answers to frequently asked questions.
- [**Troubleshoot**](tshoot-connect-sso.md) - Learn how to resolve common issues with the feature.
- [**UserVoice**](https://feedback.azure.com/forums/169401-azure-active-directory/category/160611-directory-synchronization-aad-connect) - For filing new feature requests.
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
In Step 2, Azure AD Connect creates computer accounts (representing Azure AD) in
>[!IMPORTANT]
>The Kerberos decryption key on a computer account, if leaked, can be used to generate Kerberos tickets for any user in its AD forest. Malicious actors can then impersonate Azure AD sign-ins for compromised users. We highly recommend that you periodically roll over these Kerberos decryption keys - at least once every 30 days.
-For instructions on how to roll over keys, see [Azure Active Directory Seamless Single Sign-On: Frequently asked questions](how-to-connect-sso-faq.md).
+For instructions on how to roll over keys, see [Azure Active Directory Seamless Single Sign-On: Frequently asked questions](how-to-connect-sso-faq.yml).
>[!IMPORTANT]
>You don't need to do this step _immediately_ after you have enabled the feature. Roll over the Kerberos decryption keys at least once every 30 days.
For instructions on how to roll over keys, see [Azure Active Directory Seamless
## Next steps

- [Technical deep dive](how-to-connect-sso-how-it-works.md): Understand how the Seamless Single Sign-On feature works.
-- [Frequently asked questions](how-to-connect-sso-faq.md): Get answers to frequently asked questions about Seamless Single Sign-On.
+- [Frequently asked questions](how-to-connect-sso-faq.yml): Get answers to frequently asked questions about Seamless Single Sign-On.
- [Troubleshoot](tshoot-connect-sso.md): Learn how to resolve common problems with the Seamless Single Sign-On feature.
- [UserVoice](https://feedback.azure.com/forums/169401-azure-active-directory/category/160611-directory-synchronization-aad-connect): Use the Azure Active Directory Forum to file new feature requests.
active-directory How To Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso.md
For more information on how SSO works with Windows 10 using PRT, see: [Primary R
- [**Quick Start**](how-to-connect-sso-quick-start.md) - Get up and running with Azure AD Seamless SSO.
- [**Deployment Plan**](../manage-apps/plan-sso-deployment.md) - Step-by-step deployment plan.
- [**Technical Deep Dive**](how-to-connect-sso-how-it-works.md) - Understand how this feature works.
-- [**Frequently Asked Questions**](how-to-connect-sso-faq.md) - Answers to frequently asked questions.
+- [**Frequently Asked Questions**](how-to-connect-sso-faq.yml) - Answers to frequently asked questions.
- [**Troubleshoot**](tshoot-connect-sso.md) - Learn how to resolve common issues with the feature.
- [**UserVoice**](https://feedback.azure.com/forums/169401-azure-active-directory/category/160611-directory-synchronization-aad-connect) - For filing new feature requests.
active-directory Plan Migrate Adfs Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-migrate-adfs-pass-through-authentication.md
It's important to frequently roll over the Kerberos decryption key of the AZUREA
Initiate the rollover of the seamless SSO Kerberos decryption key on the on-premises server that's running Azure AD Connect.
-For more information, see [How do I roll over the Kerberos decryption key of the AZUREADSSOACC computer account?](./how-to-connect-sso-faq.md).
+For more information, see [How do I roll over the Kerberos decryption key of the AZUREADSSOACC computer account?](./how-to-connect-sso-faq.yml).
## Monitoring and logging
active-directory Plan Migrate Adfs Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/plan-migrate-adfs-password-hash-sync.md
It's important to frequently roll over the Kerberos decryption key of the AZUREA
Initiate the rollover of the seamless SSO Kerberos decryption key on the on-premises server that's running Azure AD Connect.
-For more information, see [How do I roll over the Kerberos decryption key of the AZUREADSSOACC computer account?](./how-to-connect-sso-faq.md).
+For more information, see [How do I roll over the Kerberos decryption key of the AZUREADSSOACC computer account?](./how-to-connect-sso-faq.yml).
## Next steps
active-directory Reference Connect Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-faq.md
- Title: 'Azure Active Directory Connect FAQ - | Microsoft Docs'
-description: This article answers frequently asked questions about Azure AD Connect.
- Previously updated : 08/23/2019
-# Azure Active Directory Connect FAQ
-
-## General installation
-
-**Q: How can I harden my Azure AD Connect server to decrease the security attack surface?**
-
-Microsoft recommends hardening your Azure AD Connect server to decrease the security attack surface for this critical component of your IT environment. Following the recommendations below will decrease the security risks to your organization.
-
-* Deploy Azure AD Connect on a domain-joined server and restrict administrative access to domain administrators or other tightly controlled security groups
-
-To learn more, see:
-
-* [Securing administrators groups](/windows-server/identity/ad-ds/plan/security-best-practices/appendix-g--securing-administrators-groups-in-active-directory)
-
-* [Securing built-in administrator accounts](/windows-server/identity/ad-ds/plan/security-best-practices/appendix-d--securing-built-in-administrator-accounts-in-active-directory)
-
-* [Security improvement and sustainment by reducing attack surfaces](/windows-server/identity/securing-privileged-access/securing-privileged-access#2-reduce-attack-surfaces )
-
-* [Reducing the Active Directory attack surface](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface)
-
-**Q: Will installation work if the Azure Active Directory (Azure AD) Global Admin has two-factor authentication (2FA) enabled?**
-As of the February 2016 builds, this scenario is supported.
-
-**Q: Is there a way to install Azure AD Connect unattended?**
-Azure AD Connect installation is supported only when you use the installation wizard. An unattended, silent installation is not supported.
-
-**Q: I have a forest where one domain cannot be contacted. How do I install Azure AD Connect?**
-As of the February 2016 builds, this scenario is supported.
-
-**Q: Does the Azure Active Directory Domain Services (Azure AD DS) health agent work on server core?**
-Yes. After you install the agent, you can complete the registration process by using the following PowerShell cmdlet:
-
-`Register-AzureADConnectHealthADDSAgent -Credentials $cred`
-
-**Q: Does Azure AD Connect support syncing from two domains to an Azure AD?**
-Yes, this scenario is supported. Refer to [Multiple Domains](how-to-connect-install-multiple-domains.md).
-
-**Q: Can you have multiple connectors for the same Active Directory domain in Azure AD Connect?**
-No, multiple connectors for the same AD domain are not supported.
-
-**Q: Can I move the Azure AD Connect database from the local database to a remote SQL Server instance?**
-Yes, the following steps provide general guidance on how to do this. We are currently working on a more detailed document.
-1. Back up the LocalDB ADSync database.
-The simplest way to do this is to use SQL Server Management Studio installed on the same machine as Azure AD Connect. Connect to *(LocalDb).\ADSync*, and then back up the ADSync database.
-
-2. Restore the ADSync database to your remote SQL Server instance.
-
-3. Install Azure AD Connect against the existing [remote SQL database](how-to-connect-install-existing-database.md).
- The article demonstrates how to migrate to using a local SQL database. If you are migrating to using a remote SQL database, in step 5 of the process you must also enter an existing service account that the Windows Sync service will run as. This sync engine service account is described here:
-
- **Use an existing service account**: By default, Azure AD Connect uses a virtual service account for the synchronization services to use. If you use a remote SQL Server instance or use a proxy that requires authentication, use a managed service account or a service account in the domain, and know the password. In those cases, enter the account to use. Make sure that users who are running the installation are system administrators in SQL so that login credentials for the service account can be created. For more information, see [Azure AD Connect accounts and permissions](reference-connect-accounts-permissions.md#adsync-service-account).
-
- With the latest build, provisioning the database can now be performed out of band by the SQL administrator and then installed by the Azure AD Connect administrator with database owner rights. For more information, see [Install Azure AD Connect by using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md).
-
-To keep things simple, we recommend that users who install Azure AD Connect be system administrators in SQL. However, with recent builds you can now use delegated SQL administrators, as described in [Install Azure AD Connect using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md).
-
-**Q: What are some of the best practices from the field?**
-
-The following is an informational document that presents some of the best practices that engineering, support, and our consultants have developed over the years. These are presented in a bulleted list for quick reference. Although this list attempts to be comprehensive, additional best practices might not have made it onto the list yet.
--- If using Full SQL then it should remain local vs. remote
- - Fewer hops
- - Easier to troubleshoot
- - Less complexity
- - Need to designate resources to SQL and allow overhead for Azure AD Connect and OS
-- Bypass the proxy if at all possible. If you are unable to bypass the proxy, ensure that the timeout value is greater than 5 minutes.
-- If a proxy is required, you must add the proxy to the machine.config file
-- Be aware of local SQL jobs and maintenance and how they will impact Azure AD Connect - particularly re-indexing
-- Ensure that DNS can resolve externally
-- Ensure that [server specifications](how-to-connect-install-prerequisites.md#hardware-requirements-for-azure-ad-connect) are per recommendation, whether you are using physical or virtual servers
-- If you are using a virtual server, ensure that the required resources are dedicated
-- Ensure that the disk and disk configuration meet best practices for SQL Server
-- Install and configure Azure AD Connect Health for monitoring
-- Use the delete threshold that is built into Azure AD Connect
-- Carefully review release updates to be prepared for all changes and new attributes that may be added
-- Back up everything
- - Backup Keys
- - Backup Synchronization Rules
- - Backup Server Configuration
- - Backup SQL Database
-- Ensure that there are no 3rd-party backup agents that are backing up SQL without the SQL VSS Writer (common in virtual servers with 3rd-party snapshots)
-- Limit the number of custom synchronization rules that are used, as they add complexity
-- Treat Azure AD Connect servers as Tier 0 servers
-- Be wary of modifying cloud synchronization rules without a thorough understanding of the impact and the right business drivers
-- Make sure that the correct URLs and firewall ports are open for support of Azure AD Connect and Azure AD Connect Health
-- Leverage the cloud filtered attribute to troubleshoot and prevent phantom objects
-- With the staging server, ensure that you are using the Azure AD Connect Configuration Documenter for consistency between servers
-- Staging servers should be in separate datacenters (physical locations)
-- Staging servers are not meant to be a high-availability solution, but you can have multiple staging servers
-- Introducing a "lag" staging server could mitigate some potential downtime in case of error
-- Test and validate all upgrades on the staging server first
-- Always validate exports before switching over to the staging server. Leverage the staging server for full imports and full synchronizations to reduce business impact
-- Keep version consistency between Azure AD Connect servers as much as possible
-
-**Q: Can I allow Azure AD Connect to create the Azure AD Connector account on a workgroup machine?**
-No. In order to allow Azure AD Connect to auto-create the Azure AD Connector account, the machine must be domain-joined.
-
-## Network
-**Q: I have a firewall, network device, or something else that limits the time that connections can stay open on my network. What should my client-side timeout threshold be when I use Azure AD Connect?**
-All networking software, physical devices, or anything else that limits the maximum time that connections can remain open should use a threshold of at least five minutes (300 seconds) for connectivity between the server where the Azure AD Connect client is installed and Azure Active Directory. This recommendation also applies to all previously released Microsoft Identity synchronization tools.
-
-**Q: Are single label domains (SLDs) supported?**
-While we strongly recommend against this network configuration ([see article](https://support.microsoft.com/help/2269810/microsoft-support-for-single-label-domains)), using Azure AD Connect sync with a single-label domain is supported, as long as the network configuration for the single-label domain is functioning correctly.
-
-**Q: Are Forests with disjoint AD domains supported?**
-No, Azure AD Connect does not support on-premises forests that contain disjoint namespaces.
-
-**Q: Are "dotted" NetBIOS names supported?**
-No, Azure AD Connect does not support on-premises forests or domains where the NetBIOS name contains a dot (.).
-
-**Q: Is a pure IPv6 environment supported?**
-No, Azure AD Connect does not support a pure IPv6 environment.
-
-**Q: I have a multi-forest environment and the network between the two forests is using NAT (Network Address Translation). Is using Azure AD Connect between these two forests supported?**<br/>
-No, using Azure AD Connect over NAT is not supported.
-
-## Federation
-**Q: What do I do if I receive an email that asks me to renew my Microsoft 365 certificate?**
-For guidance about renewing the certificate, see [renew certificates](how-to-connect-fed-o365-certs.md).
-
-**Q: I have "Automatically update relying party" set for the Microsoft 365 relying party. Do I have to take any action when my token signing certificate automatically rolls over?**
-Use the guidance that's outlined in the article [renew certificates](how-to-connect-fed-o365-certs.md).
-
-## Environment
-**Q: Is it supported to rename the server after Azure AD Connect has been installed?**
-No. Changing the server name renders the sync engine unable to connect to the SQL database instance, and the service cannot start.
-
-**Q: Are Next Generation Cryptographic (NGC) sync rules supported on a FIPS-enabled machine?**
-No. They are not supported.
-
-**Q. If I disabled a synced device (for example: HAADJ) in the Azure portal, why is it re-enabled?**<br>
-Synced devices might be authored or mastered on premises. If a synced device is enabled on premises, it might be re-enabled in the Azure portal even if it was previously disabled by an administrator. To disable a synced device, use the on-premises Active Directory to disable the computer account.
-
-**Q. If I block user sign-in at the Microsoft 365 or Azure AD portal for synced users, why is it unblocked upon signing in again?**<br>
-Synced users might be authored or mastered on premises. If the account is enabled on premises, it can unblock the sign-in block placed by an administrator.
-
-## Identity data
-**Q: Why doesn't the userPrincipalName (UPN) attribute in Azure AD match the on-premises UPN?**
-For information, see these articles:
-
-* [Usernames in Microsoft 365, Azure, or Intune don't match the on-premises UPN or alternate login ID](https://mskb.pkisolutions.com/kb/2523192)
-* [Changes aren't synced by the Azure Active Directory sync tool after you change the UPN of a user account to use a different federated domain](https://mskb.pkisolutions.com/kb/2669550)
-
-You can also configure Azure AD to allow the sync engine to update the UPN, as described in [Azure AD Connect sync service features](how-to-connect-syncservice-features.md).
-
-**Q: Is it supported to soft-match an on-premises AD group or contact object with an existing Azure AD group or contact object?**
-Yes, this soft match is based on the proxyAddress. Soft matching is not supported for groups that are not mail-enabled.
-
-**Q: Is it supported to manually set the ImmutableId attribute on an existing Azure AD group or contact object to hard-match it to an on-premises AD group or contact object?**
-No, manually setting the ImmutableId attribute on an existing Azure AD group or contact object to hard-match it is currently not supported.
-
-## Custom configuration
-**Q: Where are the PowerShell cmdlets for Azure AD Connect documented?**
-With the exception of the cmdlets that are documented on this site, other PowerShell cmdlets found in Azure AD Connect are not supported for customer use.
-
-**Q: Can I use the "Server export/server import" option that's found in Synchronization Service Manager to move the configuration between servers?**
-No. This option does not retrieve all configuration settings, and it should not be used. Instead, use the wizard to create the base configuration on the second server, and use the sync rule editor to generate PowerShell scripts to move any custom rule between servers. For more information, see [Swing migration](how-to-upgrade-previous-version.md#swing-migration).
-
-**Q: Can passwords be cached for the Azure sign-in page, and can this caching be prevented because it contains a password input element with the *autocomplete = "false"* attribute?**
-Currently, modifying the HTML attributes of the **Password** field, including the autocomplete tag, is not supported. We are currently working on a feature that allows for custom JavaScript, which lets you add any attribute to the **Password** field.
-
-**Q: The Azure sign-in page displays the usernames of users who have previously signed in successfully. Can this behavior be turned off?**
-Currently, modifying the HTML attributes of the **Password** input field, including the autocomplete tag, is not supported. We are currently working on a feature that allows for custom JavaScript, which lets you add any attribute to the **Password** field.
-
-**Q: Is there a way to prevent concurrent sessions?**
-No.
-
-## Auto upgrade
-
-**Q: What are the advantages and consequences of using auto upgrade?**
-We are advising all customers to enable auto upgrade for their Azure AD Connect installation. The benefit is that you always receive the latest patches, including security updates for vulnerabilities that have been found in Azure AD Connect. The upgrade process is painless and happens automatically as soon as a new version is available. Many thousands of Azure AD Connect customers use auto upgrade with every new release.
-
-The auto-upgrade process always first establishes whether an installation is eligible for auto upgrade. If it is eligible, the upgrade is performed and tested. The process also includes looking for custom changes to rules and specific environmental factors. If the tests show that an upgrade is unsuccessful, the previous version is automatically restored.
-
-Depending on the size of the environment, the process can take a couple of hours. While the upgrade is in progress, no sync between Windows Server Active Directory and Azure AD happens.
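The upgrade flow described above (eligibility check, upgrade, post-upgrade tests, automatic restore on failure) can be sketched as straight-line control flow. This is a conceptual illustration only; the callables are hypothetical stand-ins, not actual Azure AD Connect internals:

```python
def auto_upgrade(is_eligible, install, verify, restore):
    """Conceptual sketch of the auto-upgrade process; the four
    callables are hypothetical stand-ins for the real steps."""
    if not is_eligible():   # checks custom rules and environmental factors
        return "skipped"
    install()               # the upgrade is performed
    if not verify():        # the upgrade is tested
        restore()           # the previous version is automatically restored
        return "rolled back"
    return "upgraded"
```

The point of the sketch is that a failed post-upgrade test never leaves the server on the new version: the restore step always runs before the function reports the failure.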
-
-**Q: I received an email telling me that my auto upgrade no longer works and I need to install a new version. Why do I need to do this?**
-Last year, we released a version of Azure AD Connect that, under certain circumstances, might have disabled the auto-upgrade feature on your server. We have fixed the issue in Azure AD Connect version 1.1.750.0. If you have been affected by the issue, you can mitigate it by running a PowerShell script to repair it or by manually upgrading to the latest version of Azure AD Connect.
-
-To run the PowerShell script, [download the script](/samples/browse/?redirectedfrom=TechNet-Gallery) and run it on your Azure AD Connect server in an administrative PowerShell window. To learn how to run the script, [view this short video](https://aka.ms/repairaadcau).
-
-To manually upgrade, you must download and run the latest version of the AADConnect.msi file.
-
-- If your current version is older than 1.1.750.0, [download and upgrade to the latest version](https://www.microsoft.com/download/details.aspx?id=47594).
-- If your Azure AD Connect version is 1.1.750.0 or later, no further action is required. You're already using the version that contains the auto-upgrade fix.
-
-**Q: I received an email telling me to upgrade to the latest version to re-enable auto upgrade. I am using version 1.1.654.0. Do I need to upgrade?**
-Yes, you need to upgrade to version 1.1.750.0 or later to re-enable auto upgrade. [Download and upgrade to the latest version](https://www.microsoft.com/download/details.aspx?id=47594).
-
-**Q: I received an email telling me to upgrade to the latest version to re-enable auto upgrade. If I have used PowerShell to enable auto upgrade, do I still need to install the latest version?**
-Yes, you still need to upgrade to version 1.1.750.0 or later. Enabling the auto-upgrade service with PowerShell does not mitigate the auto-upgrade issue found in versions before 1.1.750.0.
-
-**Q: I want to upgrade to a newer version but I'm not sure who installed Azure AD Connect, and we do not have the username and password. Do we need this?**
-You don't need to know the username and password that was initially used to upgrade Azure AD Connect. Use any Azure AD account that has the Global Administrator role.
-
-**Q: How can I find which version of Azure AD Connect I am using?**
-To verify which version of Azure AD Connect is installed on your server, go to Control Panel and look up the installed version of Microsoft Azure AD Connect by selecting **Programs** > **Programs and Features**, as shown here:
-
-![Azure AD Connect version in Control Panel](./media/reference-connect-faq/faq1.png)
-
-**Q: How do I upgrade to the latest version of Azure AD Connect?**
-To learn how to upgrade to the latest version, see [Azure AD Connect: Upgrade from a previous version to the latest](how-to-upgrade-previous-version.md).
-
-**Q: We already upgraded to the latest version of Azure AD Connect last year. Do we need to upgrade again?**
-The Azure AD Connect team makes frequent updates to the service. To benefit from bug fixes and security updates as well as new features, it is important to keep your server up to date with the latest version. If you enable auto upgrade, your software version is updated automatically. To find the version release history of Azure AD Connect, see [Azure AD Connect: Version release history](reference-connect-version-history.md).
-
-**Q: How long does it take to perform the upgrade, and what is the impact on my users?**
-The time needed to upgrade depends on your tenant size. For larger organizations, it might be best to perform the upgrade in the evening or weekend. During the upgrade, no synchronization activity takes place.
-
-**Q: I believe I upgraded to Azure AD Connect, but the Office portal still mentions DirSync. Why is this?**
-The Office team is working to update the Office portal to reflect the current product name. It does not reflect which sync tool you are using.
-
-**Q: My auto-upgrade status says, "Suspended." Why is it suspended? Should I enable it?**
-A bug was introduced in a previous version that, under certain circumstances, leaves the auto-upgrade status set to "Suspended." Manually enabling it is technically possible but would require several complex steps. The best thing you can do is install the latest version of Azure AD Connect.
-
-**Q: My company has strict change-management requirements, and I want to control when it's pushed out. Can I control when auto upgrade is launched?**
-No, there is no such feature today. The feature is being evaluated for a future release.
-
-**Q: Will I get an email if the auto upgrade failed? How will I know that it was successful?**
-You will not be notified of the result of the upgrade. The feature is being evaluated for a future release.
-
-**Q: Do you publish a timeline for when you plan to push out auto upgrades?**
-Auto upgrade is the first step in the release process of a newer version. Whenever there is a new release, upgrades are pushed automatically. Newer versions of Azure AD Connect are pre-announced in the [Azure AD Roadmap](../fundamentals/whats-new.md).
-
-**Q: Does auto upgrade also upgrade Azure AD Connect Health?**
-Yes, auto upgrade also upgrades Azure AD Connect Health.
-
-**Q: Do you also auto-upgrade Azure AD Connect servers in staging mode?**
-Yes, you can auto-upgrade an Azure AD Connect server that is in staging mode.
-
-**Q: If auto upgrade fails and my Azure AD Connect server does not start, what should I do?**
-In rare cases, the Azure AD Connect service does not start after you perform the upgrade. In these cases, rebooting the server usually fixes the issue. If the Azure AD Connect service still does not start, open a support ticket. For more information, see [Create a service request to contact Microsoft 365 support](/archive/blogs/praveenkumar/how-to-create-service-requests-to-contact-office-365-support).
-
-**Q: I'm not sure what the risks are when I upgrade to a newer version of Azure AD Connect. Can you call me to help me with the upgrade?**
-If you need help upgrading to a newer version of Azure AD Connect, open a support ticket at [Create a service request to contact Microsoft 365 support](/archive/blogs/praveenkumar/how-to-create-service-requests-to-contact-office-365-support).
-
-## Operational best practice
-Below are some best practices you should implement when syncing between Windows Server Active Directory and Azure Active Directory.
-
-**Apply Multi-Factor Authentication for all synced accounts**
-Azure AD Multi-Factor Authentication helps safeguard access to data and applications while maintaining simplicity for users. It provides additional security by requiring a second form of authentication and delivers strong authentication via a range of easy-to-use authentication methods. Users may or may not be challenged for MFA based on configuration decisions that an administrator makes. You can read more about MFA here: https://www.microsoft.com/security/business/identity/mfa?rtc=1
-
-**Follow the Azure AD Connect server security guidelines**
-The Azure AD Connect server contains critical identity data and should be treated as a Tier 0 component as documented in the [Active Directory administrative tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material). Please also refer to our [guidelines for securing your AADConnect server](./how-to-connect-install-prerequisites.md#azure-ad-connect-server).
-
-**Enable PHS for leaked credentials detection**
-Password Hash Sync also enables [leaked credential detection](../identity-protection/concept-identity-protection-risks.md) for your hybrid accounts. Microsoft works alongside dark web researchers and law enforcement agencies to find publicly available username/password pairs. If any of these pairs match those of your users, the associated account is moved to high risk.
--
-## Troubleshooting
-**Q: How can I get help with Azure AD Connect?**
-
-[Search the Microsoft Knowledge Base (KB)](https://www.microsoft.com/en-us/search/result.aspx?q=azure+active+directory+connect)
-
-* Search the KB for technical solutions to common break-fix issues about support for Azure AD Connect.
-
-[Microsoft Q&A question page for Azure Active Directory](/answers/topics/azure-active-directory.html)
-
-* Search for technical questions and answers or ask your own questions by going to [the Azure AD community](/answers/topics/azure-active-directory.html).
-
-[Get support for Azure AD](../fundamentals/active-directory-troubleshooting-support-howto.md)
-
-**Q: Why am I seeing Events 6311 and 6401 occur after Sync Step Errors?**
-
-The events 6311 - **The server encountered an unexpected error while performing a callback** - and 6401 - **The management agent controller encountered an unexpected error** - are always logged after a synchronization step error. To resolve these errors, you need to clean up the synchronization step errors. For more information, see [Troubleshooting errors during synchronization](tshoot-connect-sync-errors.md) and [Troubleshoot object synchronization with Azure AD Connect sync](tshoot-connect-objectsync.md).
active-directory Reference Connect Health Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-health-faq.md
- Title: Azure Active Directory Connect Health FAQ - Azure | Microsoft Docs
-description: This FAQ answers questions about Azure AD Connect Health. This FAQ covers questions about using the service, including the billing model, capabilities, limitations, and support.
- Previously updated : 07/18/2017
-# Azure AD Connect Health frequently asked questions
-This article includes answers to frequently asked questions (FAQs) about Azure Active Directory (Azure AD) Connect Health. These FAQs cover questions about how to use the service, which includes the billing model, capabilities, limitations, and support.
-
-## General questions
-**Q: I manage multiple Azure AD directories. How do I switch to the one that has Azure Active Directory Premium?**
-
-To switch between different Azure AD tenants, select the currently signed-in **User Name** in the upper-right corner, and then choose the appropriate account. If the account is not listed here, select **Sign out**, and then use the global admin credentials of the directory that has Azure Active Directory Premium (P1 or P2) enabled to sign in.
-
-**Q: What version of identity roles are supported by Azure AD Connect Health?**
-
-The following table lists the roles and supported operating system versions.
-
-|Role| Operating system / Version|
-|--|--|
-|Active Directory Federation Services (AD FS)| <ul><li> Windows Server 2012 </li> <li>Windows Server 2012 R2 </li> <li> Windows Server 2016 </li> <li> Windows Server 2019 </li> </ul>|
-|Azure AD Connect | Version 1.0.9125 or higher|
-|Active Directory Domain Services (AD DS)| <ul><li> Windows Server 2012 </li> <li>Windows Server 2012 R2 </li> <li> Windows Server 2016 </li> <li> Windows Server 2019 </li> </ul>|
-
-Windows Server Core installations are not supported.
-
-Note that the features provided by the service may differ based on the role and the operating system. In other words, not all features may be available for all operating system versions. See the feature descriptions for details.
-
-**Q: How many licenses do I need to monitor my infrastructure?**
-
-* The first Connect Health Agent requires at least one Azure AD Premium (P1 or P2) license.
-* Each additional registered agent requires 25 additional Azure AD Premium (P1 or P2) licenses.
-* Agent count is equivalent to the total number of agents that are registered across all monitored roles (AD FS, Azure AD Connect, and/or AD DS).
-* AAD Connect Health licensing does not require you to assign the license to specific users. You only need to have the requisite number of valid licenses.
-
-Licensing information is also found on the [Azure AD Pricing page](https://aka.ms/aadpricing).
-
-Example:
-
-| Registered agents | Licenses needed | Example monitoring configuration |
-| | | |
-| 1 | 1 | 1 Azure AD Connect server |
-| 2 | 26| 1 Azure AD Connect server and 1 domain controller |
-| 3 | 51 | 1 Active Directory Federation Services (AD FS) server, 1 AD FS proxy, and 1 domain controller |
-| 4 | 76 | 1 AD FS server, 1 AD FS proxy, and 2 domain controllers |
-| 5 | 101 | 1 Azure AD Connect server, 1 AD FS server, 1 AD FS proxy, and 2 domain controllers |
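The licensing rule above (one Azure AD Premium license for the first registered agent, 25 additional licenses for each agent after that) reduces to a simple formula. A minimal sketch that reproduces the example table (the function name is mine, for illustration only):

```python
def licenses_needed(registered_agents: int) -> int:
    # First agent: 1 license; each additional agent: 25 more licenses.
    if registered_agents <= 0:
        return 0
    return 1 + 25 * (registered_agents - 1)

# Reproduces the example table: 1 -> 1, 2 -> 26, 3 -> 51, 4 -> 76, 5 -> 101
```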
-
-**Q: Does Azure AD Connect Health support Azure Germany Cloud?**
-
-Azure AD Connect Health is not supported in Germany Cloud except for the [sync errors report feature](how-to-connect-health-sync.md#object-level-synchronization-error-report).
-
-| Roles | Features | Supported in German Cloud |
-| | | |
-| Connect Health for Sync | Monitoring / Insight / Alerts / Analysis | No |
-| | Sync error report | Yes |
-| Connect Health for ADFS | Monitoring / Insight / Alerts / Analysis | No |
-| Connect Health for ADDS | Monitoring / Insight / Alerts / Analysis | No |
-
-To ensure the agent connectivity of Connect Health for sync, please configure the [installation requirement](how-to-connect-health-agent-install.md#outbound-connectivity-to-the-azure-service-endpoints) accordingly.
-
-## Installation questions
-
-**Q: What is the impact of installing the Azure AD Connect Health Agent on individual servers?**
-
-The impact of installing the Microsoft Azure AD Connect Health Agent on AD FS servers, web application proxy servers, Azure AD Connect (sync) servers, and domain controllers is minimal with respect to CPU, memory consumption, network bandwidth, and storage.
-
-The following numbers are an approximation:
-
-* CPU consumption: ~1-5% increase.
-* Memory consumption: Up to 10% of the total system memory.
-
-> [!NOTE]
-> If the agent cannot communicate with Azure, the agent stores the data locally up to a defined maximum limit. The agent overwrites the "cached" data on a "least recently serviced" basis.
->
->
-
-* Local buffer storage for Azure AD Connect Health Agents: ~20 MB.
-* For AD FS servers, we recommend that you provision a disk space of 1,024 MB (1 GB) for the AD FS audit channel for Azure AD Connect Health Agents to process all the audit data before it is overwritten.
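The "least recently serviced" overwrite behavior described in the note above can be pictured as a bounded buffer that evicts its oldest entry once the local storage limit is reached. This is a conceptual sketch of that eviction policy, not the agent's actual implementation:

```python
from collections import OrderedDict

class BoundedBuffer:
    """Conceptual sketch of a bounded local cache that overwrites the
    least recently serviced entry once the limit is reached."""
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._entries = OrderedDict()

    def store(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)     # key was just serviced
        self._entries[key] = value
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # overwrite the oldest entry

    def keys(self):
        return list(self._entries)
```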
-
-**Q: Will I have to reboot my servers during the installation of the Azure AD Connect Health Agents?**
-
-No. Installing the agents will not require you to reboot the server. However, installing some prerequisites might require a reboot of the server.
-
-For example, on Windows Server 2008 R2, installing the .NET Framework 4.5 requires a server reboot.
-
-**Q: Does Azure AD Connect Health work through a pass-through HTTP proxy?**
-
-Yes. For ongoing operations, you can configure the Health Agent to use an HTTP proxy to forward outbound HTTP requests.
- Read more about [configuring HTTP Proxy for Health Agents](how-to-connect-health-agent-install.md#configure-azure-ad-connect-health-agents-to-use-http-proxy).
-
-If you need to configure a proxy during agent registration, you might need to modify your Internet Explorer Proxy settings beforehand.
-
-1. Open Internet Explorer > **Settings** > **Internet Options** > **Connections** > **LAN Settings**.
-2. Select **Use a Proxy Server for your LAN**.
-3. Select **Advanced** if you have different proxy ports for HTTP and HTTPS/Secure.
-
-**Q: Does Azure AD Connect Health support Basic authentication when connecting to HTTP proxies?**
-
-No. A mechanism to specify an arbitrary user name and password for Basic authentication is not currently supported.
-
-**Q: What firewall ports do I need to open for the Azure AD Connect Health Agent to work?**
-
-See the [requirements section](how-to-connect-health-agent-install.md#requirements) for the list of firewall ports and other connectivity requirements.
-
-**Q: Why do I see two servers with the same name in the Azure AD Connect Health portal?**
-
-When you remove an agent from a server, the server is not automatically removed from the Azure AD Connect Health portal. If you manually remove an agent from a server or remove the server itself, you need to manually delete the server entry from the Azure AD Connect Health portal.
-
-You might reimage a server or create a new server with the same details (such as machine name). If you did not remove the already registered server from the Azure AD Connect Health portal, and you installed the agent on the new server, you might see two entries with the same name.
-
-In this case, manually delete the entry that belongs to the older server. The data for this server should be out of date.
-
-**Q: Can I install the Azure AD Connect health agent on Windows Server Core?**
-
-No. Installation on Server Core is not supported.
-
-## Health Agent registration and data freshness
-
-**Q: What are common reasons for the Health Agent registration failures and how do I troubleshoot issues?**
-
-The health agent can fail to register for the following reasons:
-
-* The agent cannot communicate with the required endpoints because a firewall is blocking traffic. This is particularly common on web application proxy servers. Make sure that you have allowed outbound communication to the required endpoints and ports. See the [requirements section](how-to-connect-health-agent-install.md#requirements) for details.
-* Outbound communication is subject to TLS inspection by the network layer. This causes the certificate that the agent uses to be replaced by the inspection server/entity, and the steps to complete the agent registration fail.
-* The user does not have access to perform the registration of the agent. Global admins have access by default. You can use [Azure role-based access control (Azure RBAC)](how-to-connect-health-operations.md#manage-access-with-azure-rbac) to delegate access to other users.
-
-**Q: I am getting alerted that "Health Service data is not up to date." How do I troubleshoot the issue?**
-
-Azure AD Connect Health generates the alert when it does not receive all the data points from the server in the last two hours. [Read more](how-to-connect-health-data-freshness.md).
-
-## Operations questions
-**Q: Do I need to enable auditing on the web application proxy servers?**
-
-No, auditing does not need to be enabled on the web application proxy servers.
-
-**Q: How do Azure AD Connect Health Alerts get resolved?**
-
-Azure AD Connect Health alerts get resolved on a success condition. Azure AD Connect Health Agents detect and report the success conditions to the service periodically. For a few alerts, the suppression is time-based. In other words, if the same error condition is not observed within 72 hours from alert generation, the alert is automatically resolved.
-
-**Q: I am getting alerted that "Test Authentication Request (Synthetic Transaction) failed to obtain a token." How do I troubleshoot the issue?**
-
-Azure AD Connect Health for AD FS generates this alert when the Health Agent installed on an AD FS server fails to obtain a token as part of a synthetic transaction initiated by the Health Agent. The Health agent uses the local system context and attempts to get a token for a self relying party. This is a catch-all test to ensure that AD FS is in a state of issuing tokens.
-
-Most often this test fails because the Health Agent is unable to resolve the AD FS farm name. This can happen if the AD FS servers are behind a network load balancer and the request gets initiated from a node that's behind the load balancer (as opposed to a regular client that is in front of the load balancer). This can be fixed by updating the "hosts" file located under "C:\Windows\System32\drivers\etc" to include the IP address of the AD FS server or a loopback IP address (127.0.0.1) for the AD FS farm name (such as sts.contoso.com). Adding the hosts file entry short-circuits the network call, thus allowing the Health Agent to get the token.
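For example, the hosts file entry might look like the following sketch, where sts.contoso.com is a placeholder for your own AD FS farm name:

```
# C:\Windows\System32\drivers\etc\hosts
127.0.0.1    sts.contoso.com
```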
-
-**Q: I got an email indicating my machines are NOT patched for the recent ransomware attacks. Why did I receive this email?**
-
-Azure AD Connect Health service scanned all the machines it monitors to ensure the required patches were installed. The email was sent to the tenant administrators if at least one machine did not have the critical patches. The following logic was used to make this determination.
-1. Find all the hotfixes installed on the machine.
-2. Check if at least one of the HotFixes from the defined list is present.
-3. If yes, the machine is protected. If not, the machine is at risk for the attack.
-
-You can use the following PowerShell script to perform this check manually. It implements the above logic.
-
-```powershell
-Function CheckForMS17-010 ()
-{
- $hotfixes = "KB3205409", "KB3210720", "KB3210721", "KB3212646", "KB3213986", "KB4012212", "KB4012213", "KB4012214", "KB4012215", "KB4012216", "KB4012217", "KB4012218", "KB4012220", "KB4012598", "KB4012606", "KB4013198", "KB4013389", "KB4013429", "KB4015217", "KB4015438", "KB4015546", "KB4015547", "KB4015548", "KB4015549", "KB4015550", "KB4015551", "KB4015552", "KB4015553", "KB4015554", "KB4016635", "KB4019213", "KB4019214", "KB4019215", "KB4019216", "KB4019263", "KB4019264", "KB4019472", "KB4015221", "KB4019474", "KB4015219", "KB4019473"
-
- #Checks whether any of the listed hotfixes are present on this computer
- $hotfix = Get-HotFix -ComputerName $env:computername | Where-Object {$hotfixes -contains $_.HotfixID} | Select-Object -Property "HotFixID"
-
- #Reports whether a matching hotfix was found
- if ($hotfix)
- {
-  "Found HotFix: " + $hotfix.HotFixID
- } else {
-  "Didn't Find HotFix"
- }
-}
-
-CheckForMS17-010
-
-```
-
-**Q: Why does the PowerShell cmdlet <i>Get-MsolDirSyncProvisioningError</i> show fewer sync errors in the result?**
-
-<i>Get-MsolDirSyncProvisioningError</i> returns only DirSync provisioning errors. In addition to those, the Connect Health portal shows other sync error types, such as export errors. This behavior is consistent with the Azure AD Connect delta sync result. Read more about [Azure AD Connect Sync errors](./tshoot-connect-sync-errors.md).
-
-**Q: Why are my AD FS audits not being generated?**
-
-Use the PowerShell expression <i>(Get-AdfsProperties).AuditLevel</i> to ensure that audit logging is not disabled. Read more about [AD FS audit logs](/windows-server/identity/ad-fs/technical-reference/auditing-enhancements-to-ad-fs-in-windows-server#auditing-levels-in-ad-fs-for-windows-server-2016). Note that if advanced audit settings are pushed to the AD FS server, any changes made with auditpol.exe will be overwritten (even if Application Generated is not configured). In this case, set the local security policy to log Application Generated successes and failures.
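These checks can be sketched in PowerShell. This assumes you run it on the AD FS server itself, where the ADFS PowerShell module and the built-in auditpol.exe are available:

```powershell
# Confirm the AD FS service-level audit setting (should not be 'None').
(Get-AdfsProperties).AuditLevel

# Confirm the OS-level 'Application Generated' audit subcategory is enabled
# for both success and failure events.
auditpol.exe /get /subcategory:"Application Generated"
```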
-
-**Q: Is the agent certificate automatically renewed before expiration?**
-The agent certificate is automatically renewed **6 months** before its expiration date. If it is not renewed, ensure that the agent has a stable network connection. Restarting the agent services or updating to the latest version may also resolve the issue.
---
-## Related links
-* [Azure AD Connect Health](./whatis-azure-ad-connect.md)
-* [Azure AD Connect Health Agent installation](how-to-connect-health-agent-install.md)
-* [Azure AD Connect Health operations](how-to-connect-health-operations.md)
-* [Using Azure AD Connect Health with AD FS](how-to-connect-health-adfs.md)
-* [Using Azure AD Connect Health for sync](how-to-connect-health-sync.md)
-* [Using Azure AD Connect Health with AD DS](how-to-connect-health-adds.md)
-* [Azure AD Connect Health version history](reference-connect-health-version-history.md)
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-sso.md
This article helps you find troubleshooting information about common problems re
- If a user is part of too many groups in Active Directory, the user's Kerberos ticket will likely be too large to process, and this will cause Seamless SSO to fail. Azure AD HTTPS requests can have headers with a maximum size of 50 KB; Kerberos tickets need to be smaller than that limit to accommodate other Azure AD artifacts (typically, 2 - 5 KB) such as cookies. Our recommendation is to reduce user's group memberships and try again. - If you're synchronizing 30 or more Active Directory forests, you can't enable Seamless SSO through Azure AD Connect. As a workaround, you can [manually enable](#manual-reset-of-the-feature) the feature on your tenant. - Adding the Azure AD service URL (`https://autologon.microsoftazuread-sso.com`) to the Trusted sites zone instead of the Local intranet zone *blocks users from signing in*.-- Seamless SSO supports the AES256_HMAC_SHA1, AES128_HMAC_SHA1 and RC4_HMAC_MD5 encryption types for Kerberos. It is recommended that the encryption type for the AzureADSSOAcc$ account is set to AES256_HMAC_SHA1, or one of the AES types vs. RC4 for added security. The encryption type is stored on the msDS-SupportedEncryptionTypes attribute of the account in your Active Directory. If the AzureADSSOAcc$ account encryption type is set to RC4_HMAC_MD5, and you want to change it to one of the AES encryption types, please make sure that you first roll over the Kerberos decryption key of the AzureADSSOAcc$ account as explained in the [FAQ document](how-to-connect-sso-faq.md) under the relevant question, otherwise Seamless SSO will not happen.
+- Seamless SSO supports the AES256_HMAC_SHA1, AES128_HMAC_SHA1 and RC4_HMAC_MD5 encryption types for Kerberos. It is recommended that the encryption type for the AzureADSSOAcc$ account is set to AES256_HMAC_SHA1, or one of the AES types vs. RC4 for added security. The encryption type is stored on the msDS-SupportedEncryptionTypes attribute of the account in your Active Directory. If the AzureADSSOAcc$ account encryption type is set to RC4_HMAC_MD5, and you want to change it to one of the AES encryption types, please make sure that you first roll over the Kerberos decryption key of the AzureADSSOAcc$ account as explained in the [FAQ document](how-to-connect-sso-faq.yml) under the relevant question, otherwise Seamless SSO will not happen.
- If you have more than one forest with forest trust, enabling SSO in one of the forests, will enable SSO in all trusted forests. If you enable SSO in a forest where SSO is already enabled, you'll get an error saying that SSO is already enabled in the forest. ## Check status of feature
active-directory Overview Identity Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/overview-identity-protection.md
Identity Protection is a tool that allows organizations to accomplish three key
- [Automate the detection and remediation of identity-based risks](howto-identity-protection-configure-risk-policies.md). - [Investigate risks](howto-identity-protection-investigate-risk.md) using data in the portal.-- [Export risk detection data to your SEIM](../../sentinel/connect-azure-ad-identity-protection.md).
+- [Export risk detection data to your SIEM](../../sentinel/connect-azure-ad-identity-protection.md).
Identity Protection uses the learnings Microsoft has acquired from their position in organizations with Azure AD, the consumer space with Microsoft Accounts, and in gaming with Xbox to protect your users. Microsoft analyses 6.5 trillion signals per day to identify and protect customers from threats.
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/delegate-by-task.md
In this article, you can find the information needed to restrict a user's admini
> | Create, read, update, and delete sign-up user flow |External ID User Flow Administrator | | > | Create, read, update, and delete user attributes | External ID User Flow Attribute Administrator | | > | Create, read, update, and delete users | User Administrator | |
+> | Configure B2B external collaboration settings | Global Administrator | |
> | Read all configuration | Global Reader | | > | Read B2C audit logs | Global Reader ([see documentation](../../active-directory-b2c/faq.yml)) | |
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/manage-roles-portal.md
Previously updated : 05/14/2021 Last updated : 06/29/2021
# Assign Azure AD roles to users
-You can now see and manage all the members of the administrator roles in the Azure AD admin center. If you frequently manage role assignments, you will probably prefer this experience. This article describes how to assign Azure AD roles using the Azure AD admin center.
+To grant access to users in Azure Active Directory (Azure AD), you assign Azure AD roles. A role is a collection of permissions. This article describes how to assign Azure AD roles using the Azure portal and PowerShell.
## Prerequisites - Privileged Role Administrator or Global Administrator - Azure AD Premium P2 license when using Privileged Identity Management (PIM)
+- AzureADPreview module when using PowerShell
-## Assign a role
+For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
-1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com).
+## Azure portal
+
+Follow these steps to assign Azure AD roles using the Azure portal. Your experience will be different depending on whether you have [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) enabled.
+
+### Assign a role
-1. Select **Azure Active Directory**.
+1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com).
-1. Select **Roles and administrators** to see the list of all available roles.
+1. Select **Azure Active Directory** > **Roles and administrators** to see the list of all available roles.
- ![Screenshot of the Roles and administrators page](./media/manage-roles-portal/roles-and-administrators.png)
+ ![Roles and administrators page in Azure Active Directory.](./media/manage-roles-portal/roles-and-administrators.png)
1. Select a role to see its assignments.
- To help you find the role you need, Azure AD can show you subsets of the roles based on role categories. Check out the **Type** filter to show you only the roles in the selected type.
+ To help you find the role you need, use **Add filters** to filter the roles.
1. Select **Add assignments** and then select the users you want to assign to this role.
- If you see something different from the following picture, read the Note in [Privileged Identity Management (PIM)](#privileged-identity-management-pim) to verify whether you are using PIM.
+ If you see something different from the following picture, you might have PIM enabled. See the next section.
- ![list of permissions for an admin role](./media/manage-roles-portal/add-assignments.png)
+ ![Add assignments pane for selected role.](./media/manage-roles-portal/add-assignments.png)
1. Select **Add** to assign the role.
-## Privileged Identity Management (PIM)
+### Assign a role using PIM
+
+If you have [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) enabled, you have additional role assignment capabilities. For example, you can make a user eligible for a role or set the duration. When PIM is enabled, there are two ways that you can assign roles using the Azure portal. You can use the Roles and administrators page or the PIM experience. Either way uses the same PIM service.
+
+Follow these steps to assign roles using the [Roles and administrators](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators) page. If you want to assign roles using the [Privileged Identity Management](https://portal.azure.com/#blade/Microsoft_Azure_PIMCommon/CommonMenuBlade/quickStart) page, see [Assign Azure AD roles in Privileged Identity Management](../privileged-identity-management/pim-how-to-add-role-to-user.md).
+
+1. Sign in to the [Azure AD admin center](https://aad.portal.azure.com).
+
+1. Select **Azure Active Directory** > **Roles and administrators** to see the list of all available roles.
+
+ ![Roles and administrators page in Azure Active Directory when PIM enabled.](./media/manage-roles-portal/roles-and-administrators.png)
+
+1. Select a role to see its eligible, active, and expired role assignments.
+
+ To help you find the role you need, use **Add filters** to filter the roles.
+
+1. Select **Add assignments**.
+
+1. Select **No member selected** and then select the users you want to assign to this role.
+
+ ![Add assignments page and Select a member pane with PIM enabled.](./media/manage-roles-portal/add-assignments-pim.png)
+
+1. Select **Next**.
+
+1. On the **Setting** tab, select whether you want to make this role assignment **Eligible** or **Active**.
+
+ An eligible role assignment means that the user must perform one or more actions to use the role. An active role assignment means that the user doesn't have to perform any action to use the role. For more information about what these settings mean, see [PIM terminology](../privileged-identity-management/pim-configure.md#terminology).
+
+ ![Add assignments page and Setting tab with PIM enabled.](./media/manage-roles-portal/add-assignments-pim-setting.png)
+
+1. Use the remaining options to set the duration for the assignment.
+
+1. Select **Assign** to assign the role.
+
+## PowerShell
+
+Follow these steps to assign Azure AD roles using PowerShell.
+
+### Setup
+
+1. Open a PowerShell window and use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import the AzureADPreview module. For more information, see [Prerequisites to use PowerShell or Graph Explorer](prerequisites.md).
+
+ ```powershell
+ Import-Module -Name AzureADPreview -Force
+ ```
+
+1. In a PowerShell window, use [Connect-AzureAD](/powershell/module/azuread/connect-azuread) to sign in to your tenant.
+
+ ```powershell
+ Connect-AzureAD
+ ```
+
+1. Use [Get-AzureADUser](/powershell/module/azuread/get-azureaduser) to get the user you want to assign a role to.
+
+ ```powershell
+ $user = Get-AzureADUser -Filter "userPrincipalName eq 'user@contoso.com'"
+ ```
+
+### Assign a role
+
+1. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get the role you want to assign.
+
+ ```powershell
+ $roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Billing Administrator'"
+ ```
+
+1. Use [New-AzureADMSRoleAssignment](/powershell/module/azuread/new-azureadmsroleassignment) to assign the role.
+
+ ```powershell
+ $roleAssignment = New-AzureADMSRoleAssignment -DirectoryScopeId '/' -RoleDefinitionId $roleDefinition.Id -PrincipalId $user.objectId
+ ```
+
+### Assign a role as eligible using PIM
+
+If PIM is enabled, you have additional capabilities, such as making a user eligible for a role assignment or defining the start and end time for a role assignment. These capabilities use a different set of PowerShell commands. For more information about using PowerShell and PIM, see [PowerShell for Azure AD roles in Privileged Identity Management](../privileged-identity-management/powershell-for-azure-ad-roles.md).
++
+1. Use [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) to get the role you want to assign.
+
+ ```powershell
+ $roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Billing Administrator'"
+ ```
+
+1. Use [Get-AzureADMSPrivilegedResource](/powershell/module/azuread/get-azureadmsprivilegedresource) to get the privileged resource. In this case, your tenant.
+
+ ```powershell
+ $aadTenant = Get-AzureADMSPrivilegedResource -ProviderId aadRoles
+ ```
-You can select **Manage in PIM** for additional management capabilities using [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md). Privileged Role Administrators can change "Permanent" (always active in the role) assignments to "Eligible" (in the role only when elevated). If you don't have Privileged Identity Management, you can still select **Manage in PIM** to sign up for a trial. Privileged Identity Management requires an [Azure AD Premium P2 license](../privileged-identity-management/subscription-requirements.md).
+1. Use [New-Object](/powershell/module/microsoft.powershell.utility/new-object) to create a new `AzureADMSPrivilegedSchedule` object to define the start and end time of the role assignment.
-![Screenshot that shows the "User Administrator - Assignments" page with the "Manage in PIM" action selected](./media/manage-roles-portal/member-list-pim.png)
+ ```powershell
+ $schedule = New-Object Microsoft.Open.MSGraph.Model.AzureADMSPrivilegedSchedule
+ $schedule.Type = "Once"
+ $schedule.StartDateTime = (Get-Date).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ")
+ $schedule.EndDateTime = "2021-07-25T20:00:00.000Z"
+ ```
-If you are a Global Administrator or a Privileged Role Administrator, you can easily add or remove members, filter the list, or select a member to see their active assigned roles.
+1. Use [Open-AzureADMSPrivilegedRoleAssignmentRequest](/powershell/module/azuread/open-azureadmsprivilegedroleassignmentrequest) to assign the role as eligible.
-> [!Note]
-> If you have an Azure AD premium P2 license and you already use Privileged Identity Management, all role management tasks are performed in Privilege Identity Management and not in Azure AD.
->
-> ![Azure AD roles managed in PIM for users who already use PIM and have a Premium P2 license](./media/manage-roles-portal/pim-manages-roles-for-p2.png)
+ ```powershell
+ $roleAssignmentEligible = Open-AzureADMSPrivilegedRoleAssignmentRequest -ProviderId 'aadRoles' -ResourceId $aadTenant.Id -RoleDefinitionId $roleDefinition.Id -SubjectId $user.objectId -Type 'AdminAdd' -AssignmentState 'Eligible' -schedule $schedule -reason "Review billing info"
+ ```
## Next steps
-* Feel free to share with us on the [Azure AD administrative roles forum](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=166032).
-* For more about roles, see [Azure AD built-in roles](permissions-reference.md).
-* For default user permissions, see a [comparison of default guest and member user permissions](../fundamentals/users-default-permissions.md).
+- [List Azure AD role assignments](view-assignments.md)
+- [Assign custom roles with resource scope using PowerShell](custom-assign-powershell.md)
+- [Azure AD built-in roles](permissions-reference.md)
active-directory Envoy Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/envoy-provisioning-tutorial.md
Previously updated : 06/3/2019 Last updated : 06/28/2021
The scenario outlined in this tutorial assumes that you already have the followi
1. Sign in to your [Envoy Admin Console](https://dashboard.envoy.com/login). Click on **Integrations**.
- ![Envoy Integrations](media/envoy-provisioning-tutorial/envoy01.png)
+ ![Envoy Integrations](media/envoy-provisioning-tutorial/envoy-01.png)
2. Click on **Install** for the **Microsoft Azure SCIM integration**.
- ![Envoy Install](media/envoy-provisioning-tutorial/envoy02.png)
+ ![Envoy Install](media/envoy-provisioning-tutorial/integrations.png)
3. Click on **Save** for **Sync all users**.
- ![Envoy Save](media/envoy-provisioning-tutorial/envoy03.png)
+ ![Envoy Save](media/envoy-provisioning-tutorial/microsoft-azure.png)
4. Copy the **OAUTH BEARER TOKEN**. This value will be entered in the **Secret Token** field in the provisioning tab of your Envoy application in the Azure portal.
- ![Envoy OAUTH](media/envoy-provisioning-tutorial/envoy04.png)
+ ![Envoy OAUTH](media/envoy-provisioning-tutorial/token.png)
## Step 3. Add Envoy from the Azure AD application gallery
active-directory Envoy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/envoy-tutorial.md
Previously updated : 04/01/2021 Last updated : 06/25/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://app.envoy.com/a/saml/auth/<company-ID-from-Envoy>`
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![My apps extension](common/install-myappssecure-extension.png)
-2. After adding extension to the browser, click on **Setup Envoy** will direct you to the Envoy application. From there, provide the admin credentials to sign into Envoy. The browser extension will automatically configure the application for you and automate steps 3-7.
+2. After adding the extension to the browser, clicking on **Setup Envoy** will direct you to the Envoy application. From there, provide the admin credentials to sign in to Envoy. The browser extension will automatically configure the application for you and automate steps 3-5.
![Setup configuration](common/setup-sso.png)
-3. If you want to setup Envoy manually, open a new web browser window and sign into your Envoy company site as an administrator and perform the following steps:
+3. If you want to set up Envoy manually, open a new web browser window, sign in to your Envoy company site as an administrator, and perform the following steps.
-4. In the toolbar on the top, click **Settings**.
+4. Go to **Integrations** > **All integrations** and click to **Install** SAML under **Single sign-on**.
- ![Envoy](./media/envoy-tutorial/envoy-1.png "Envoy")
+ ![SAML Authentication](./media/envoy-tutorial/integrations.png "SAML Authentication")
-5. Click **Company**.
+5. Navigate to **Enabled integrations** section, and perform the following steps:
- ![Company](./media/envoy-tutorial/envoy-2.png "Company")
-
-6. Click **SAML**.
-
- ![SAML](./media/envoy-tutorial/envoy-3.png "SAML")
-
-7. In the **SAML Authentication** configuration section, perform the following steps:
-
- ![SAML authentication](./media/envoy-tutorial/envoy-4.png "SAML authentication")
+ ![Single sign-on](./media/envoy-tutorial/configuration.png "Single sign-on")
>[!NOTE] >The value for the HQ location ID is auto generated by the application.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
b. Paste **Login URL** value, which you have copied form the Azure portal into the **IDENTITY PROVIDER HTTP SAML URL** textbox.
- c. Click **Save changes**.
+ c. Click **Save**.
### Create Envoy test user
active-directory Solarwinds Orion Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/solarwinds-orion-tutorial.md
Previously updated : 03/01/2021 Last updated : 06/29/2021
application integration page, find the **Manage** section and select **single si
| LastName | user.surname | | Email |user.mail |
+1. In the **User Attributes & Claims** section, click the pencil icon to edit, and then click **Add a group claim**.
+
+ ![Screenshot for User Attributes & Claims.](./media/solarwinds-orion-tutorial/group-claim.png)
+
+1. Choose **Security groups**.
+1. If you have Azure AD synchronized with your on-premises AD, change **Source attribute** to **sAMAccountName**. Otherwise, leave it as Group ID.
+
+1. In the **Advanced options**, select **Customize the name of the group claim** and enter **OrionGroups** as the name.
+
+1. Click **Save**.
+ 1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure SolarWinds Orion you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure SolarWinds Orion you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Sosafe Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/sosafe-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure SoSafe for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to SoSafe.
+
+documentationcenter: ''
+
+writer: Zhchia
+
+ms.assetid: 30de9f90-482e-43ef-9fcb-f3d4f5eac533
+++
+ na
+ms.devlang: na
+ Last updated : 06/07/2021+++
+# Tutorial: Configure SoSafe for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both SoSafe and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [SoSafe](https://sosafe.de/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in SoSafe.
+> * Remove users in SoSafe when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and SoSafe.
+> * Provision groups and group memberships in SoSafe.
+> * [Single sign-on](servicessosafe-tutorial.md) to SoSafe (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [SoSafe](https://sosafe.de/) tenant.
+* A user account in SoSafe with Admin permissions.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and SoSafe](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure SoSafe to support provisioning with Azure AD
+
+1. Log in to the [SoSafe admin console](https://manager.sosafe.de) and navigate to the **Extended Data > SCIM** tab.
+1. Enter your Azure Tenant ID under **Identity Provider Tenant ID (Azure, Okta, etc.)** and select **Save**.
+1. Click on **Generate Token**.
+1. Copy the **Tenant URL** and **Token** visible on this page. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your SoSafe application in the Azure portal.
+
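As a local sanity check before running **Test Connection** in the Azure portal, you can reproduce the kind of request Azure AD makes with the values you copied. The sketch below only builds the request without sending it; the `/Users` probe path, query parameters, and host name are illustrative assumptions, not documented SoSafe endpoints:

```python
# Hypothetical sketch: build (but don't send) the SCIM probe request that a
# provisioning "Test Connection" performs, to sanity-check your values locally.
from urllib.request import Request

def build_scim_probe(tenant_url: str, secret_token: str) -> Request:
    # SCIM clients typically probe the /Users endpoint with an OAuth bearer header.
    url = tenant_url.rstrip("/") + "/Users?startIndex=1&count=1"
    return Request(url, headers={
        "Authorization": "Bearer " + secret_token,
        "Accept": "application/scim+json",
    })

req = build_scim_probe("https://scim.example-sosafe.test/v2", "PASTE-TOKEN-HERE")
print(req.full_url)
```

If the URL and header look right here but **Test Connection** still fails, the token or tenant ID saved in the SoSafe admin console is the next thing to re-check.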
+## Step 3. Add SoSafe from the Azure AD application gallery
+
+Add SoSafe from the Azure AD application gallery to start managing provisioning to SoSafe. If you have previously set up SoSafe for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to SoSafe, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to SoSafe
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in SoSafe based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for SoSafe in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **SoSafe**.
+
+ ![The SoSafe link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, input your SoSafe **Tenant URL** and **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to SoSafe. If the connection fails, ensure your SoSafe account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to SoSafe**.
+
+1. Review the user attributes that are synchronized from Azure AD to SoSafe in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SoSafe for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the SoSafe API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |userName|String|&check;
+ |active|Boolean|
+ |displayName|String|
+ |title|String|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |name.formatted|String|
+ |name.honorificPrefix|String|
+ |name.honorificSuffix|String|
+ |addresses[type eq "work"].formatted|String|
+ |addresses[type eq "work"].streetAddress|String|
+ |addresses[type eq "work"].locality|String|
+ |addresses[type eq "work"].region|String|
+ |addresses[type eq "work"].postalCode|String|
+ |addresses[type eq "work"].country|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |externalId|String|
+ |nickName|String|
+ |userType|String|
+ |locale|String|
+ |timezone|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
++
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Groups to SoSafe**.
+
+1. Review the group attributes that are synchronized from Azure AD to SoSafe in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in SoSafe for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |displayName|String|&check;
+ |members|Reference|
+ |externalId|String|
+
+1. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for SoSafe, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to SoSafe by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
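For orientation, the user attribute mappings above translate into a SCIM 2.0 payload along these lines. This is an illustrative sketch with placeholder values, not actual SoSafe data or the exact request body the provisioning service emits:

```python
# Illustrative only: a rough SCIM 2.0 user resource corresponding to the
# attribute mappings shown earlier. All values are placeholders.
ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE],
    "userName": "jane.doe@contoso.com",  # the Matching property
    "active": True,
    "displayName": "Jane Doe",
    "preferredLanguage": "de-DE",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"type": "work", "value": "jane.doe@contoso.com"}],
    ENTERPRISE: {"department": "Security Awareness"},
}

print(user["userName"])
```

Because `userName` is the matching attribute, the provisioning service uses it to decide whether to create a new user in SoSafe or update an existing one.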
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/certificate-rotation.md
AKS generates and uses the following certificates, Certificate Authorities, and
* The `kubectl` client has a certificate for communicating with the AKS cluster. > [!NOTE]
-> AKS clusters created prior to March 2019 have certificates that expire after two years. Any cluster created after March 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other certificates expire after two years. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
+> AKS clusters created prior to May 2019 have certificates that expire after two years. Any cluster created after May 2019 or any cluster that has its certificates rotated have Cluster CA certificates that expire after 30 years. All other certificates expire after two years. To verify when your cluster was created, use `kubectl get nodes` to see the *Age* of your node pools.
>
-> Additionally, you can check the expiration date of your cluster's certificate. For example, the following Bash command displays the certificate details for the *myAKSCluster* cluster.
+> Additionally, you can check the expiration date of your cluster's certificate. For example, the following bash command displays the certificate details for the *myAKSCluster* cluster in resource group *rg*.
> ```console
-> kubectl config view --raw -o jsonpath="{.clusters[?(@.name == 'myAKSCluster')].cluster.certificate-authority-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
+> kubectl config view --raw -o jsonpath="{.users[?(@.name == 'clusterUser_rg_myAKSCluster')].user.client-certificate-data}" | base64 -d | openssl x509 -text | grep -A2 Validity
> ```
+* Check expiration date of certificate on VMAS agent node
+```console
+az vm run-command invoke -g MC_rg_myAKSCluster_region -n vm-name --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/client.crt -noout -enddate"
+```
+
+* Check expiration date of certificate on one VMSS agent node
+```console
+az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-id 0 --command-id RunShellScript --query 'value[0].message' -otsv --scripts "openssl x509 -in /etc/kubernetes/certs/client.crt -noout -enddate"
+```
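Each of the commands above prints an `openssl -enddate` line such as `notAfter=May 21 23:54:11 2051 GMT`. The following sketch (a hypothetical helper, not part of AKS tooling) turns that output into the number of days remaining before expiry:

```python
# Sketch: parse the "notAfter=..." line printed by openssl -enddate and
# compute how many days remain before the certificate expires.
from datetime import datetime, timezone

def days_until_expiry(enddate_line: str, now: datetime) -> int:
    # openssl -enddate prints e.g. "notAfter=May 21 23:54:11 2051 GMT"
    stamp = enddate_line.split("=", 1)[1].strip()
    expiry = datetime.strptime(stamp, "%b %d %H:%M:%S %Y %Z")
    return (expiry.replace(tzinfo=timezone.utc) - now).days

now = datetime(2021, 7, 1, tzinfo=timezone.utc)
print(days_until_expiry("notAfter=May 21 23:54:11 2051 GMT", now))
```

A large remaining count (decades) indicates a rotated 30-year Cluster CA; a value under two years is consistent with the pre-May-2019 certificates described above.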
+ ## Rotate your cluster certificates > [!WARNING]
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-container-registry-integration.md
description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Co
Previously updated : 01/08/2021- Last updated : 06/10/2021+ # Authenticate with Azure Container Registry from Azure Kubernetes Service
-When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This operation is implemented as part of the CLI and Portal experience by granting the required permissions to your ACR. This article provides examples for configuring authentication between these two Azure services.
+When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This operation is implemented as part of the CLI, PowerShell, and Portal experience by granting the required permissions to your ACR. This article provides examples for configuring authentication between these two Azure services.
-You can set up the AKS to ACR integration in a few simple commands with the Azure CLI. This integration assigns the AcrPull role to the managed identity associated to the AKS Cluster.
+You can set up the AKS to ACR integration in a few simple commands with the Azure CLI or Azure PowerShell. This integration assigns the AcrPull role to the managed identity associated to the AKS Cluster.
> [!NOTE] > This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][Image Pull Secret].
You can set up the AKS to ACR integration in a few simple commands with the Azur
These examples require:
+### [Azure CLI](#tab/azure-cli)
+ * **Owner**, **Azure account administrator**, or **Azure co-administrator** role on the **Azure subscription** * Azure CLI version 2.7.0 or later
+### [Azure PowerShell](#tab/azure-powershell)
+
+* **Owner**, **Azure account administrator**, or **Azure co-administrator** role on the **Azure subscription**
+* Azure PowerShell version 5.9.0 or later
+++ To avoid needing an **Owner**, **Azure account administrator**, or **Azure co-administrator** role, you can use an existing managed identity to authenticate ACR from AKS. For more information, see [Use an Azure managed identity to authenticate to an Azure container registry](../container-registry/container-registry-authentication-managed-identity.md). ## Create a new AKS cluster with ACR integration
-You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **managed identity** is used. The following CLI command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the managed identity. Supply valid values for your parameters below.
+You can set up AKS and ACR integration during the initial creation of your AKS cluster. To allow an AKS cluster to interact with ACR, an Azure Active Directory **managed identity** is used. The following command allows you to authorize an existing ACR in your subscription and configures the appropriate **ACRPull** role for the managed identity. Supply valid values for your parameters below.
+
+### [Azure CLI](#tab/azure-cli)
```azurecli # set this to the name of your Azure Container Registry. It must be globally unique
Alternatively, you can specify the ACR name using an ACR resource ID, which has
az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myContainerRegistryResourceGroup/providers/Microsoft.ContainerRegistry/registries/myContainerRegistry ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+# set this to the name of your Azure Container Registry. It must be globally unique
+$MYACR = 'myContainerRegistry'
+
+# Run the following line to create an Azure Container Registry if you do not already have one
+New-AzContainerRegistry -Name $MYACR -ResourceGroupName myContainerRegistryResourceGroup -Sku Basic
+
+# Create an AKS cluster with ACR integration
+New-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -GenerateSshKey -AcrNameToAttach $MYACR
+```
+++ This step may take several minutes to complete. ## Configure ACR integration for existing AKS clusters
+### [Azure CLI](#tab/azure-cli)
+ Integrate an existing ACR with existing AKS clusters by supplying valid values for **acr-name** or **acr-resource-id** as below. ```azurecli
or
az aks update -n myAKSCluster -g myResourceGroup --detach-acr <acr-resource-id> ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+Integrate an existing ACR with existing AKS clusters by supplying valid values for **acr-name** as below.
+
+```azurepowershell
+Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToAttach <acr-name>
+```
+
+> [!NOTE]
+> Running `Set-AzAksCluster -AcrNameToAttach` uses the permissions of the user running the command to create the role ACR assignment. This role is assigned to the kubelet managed identity. For more information on the AKS managed identities, see [Summary of managed identities][summary-msi].
+
+You can also remove the integration between an ACR and an AKS cluster with the following command:
+
+```azurepowershell
+Set-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup -AcrNameToDetach <acr-name>
+```
+++ ## Working with ACR & AKS ### Import an image into your ACR Import an image from Docker Hub into your ACR by running the following:
+### [Azure CLI](#tab/azure-cli)
```azurecli az acr import -n <acr-name> --source docker.io/library/nginx:latest --image nginx:v1 ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Import-AzContainerRegistryImage -RegistryName <acr-name> -ResourceGroupName myResourceGroup -SourceRegistryUri docker.io -SourceImage library/nginx:latest
+```
+++ ### Deploy the sample image from ACR to AKS Ensure you have the proper AKS credentials
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli az aks get-credentials -g myResourceGroup -n myAKSCluster ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+```
+++ Create a file called **acr-nginx.yaml** that contains the following. Substitute the resource name of your registry for **acr-name**. Example: *myContainerRegistry*. ```yaml
nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
[AKS AKS CLI]: /cli/azure/aks#az_aks_create [Image Pull secret]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
-[summary-msi]: use-managed-identity.md#summary-of-managed-identities
+[summary-msi]: use-managed-identity.md#summary-of-managed-identities
api-management Api Management Caching Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-caching-policies.md
Last updated 03/08/2021
# API Management caching policies
-This topic provides a reference for the following API Management policies. For information on adding and configuring policies, see [Policies in API Management](./api-management-policies.md).
+
+This article provides a reference for the following API Management policies. For information on adding and configuring policies, see [Policies in API Management](./api-management-policies.md).
> [!IMPORTANT] > Built-in cache is volatile and is shared by all units in the same region in the same API Management service.
Use the `cache-lookup` policy to perform cache look up and return a valid cached
</cache-lookup> ```
+> [!NOTE]
+> When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. When this flag is set to `false`, query parameters that aren't declared in the template aren't forwarded to the back end.
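The note above can be illustrated with a hedged policy sketch. The query parameter name `version` and the rewrite path are illustrative assumptions, not values from this article:

```xml
<inbound>
    <base />
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" downstream-caching-type="none">
        <vary-by-query-parameter>version</vary-by-query-parameter>
    </cache-lookup>
    <!-- With copy-unmatched-params="false", query parameters that are not
         declared in the template are not forwarded to the back end. -->
    <rewrite-uri template="/items?version={version}" copy-unmatched-params="false" />
</inbound>
```

Here the cache key varies by `version`, and the rewrite declares that same parameter, so undeclared query parameters can't bypass the cache key while still reaching the back end.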
+ ### Examples #### Example
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
When API Management deploys in internal VNET mode, you can only view the followi
Use API Management in internal mode to:
-* Make APIs hosted in your private datacenter securely accessible by third parties, using site-to-site or Azure ExpressRoute VPN connections.
+* Make APIs hosted in your private datacenter securely accessible by third parties outside of it by using Azure VPN Connections or Azure ExpressRoute.
* Enable hybrid cloud scenarios by exposing your cloud-based APIs and on-premises APIs through a common gateway. * Manage your APIs hosted in multiple geographic locations, using a single gateway endpoint.
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking/private-endpoint.md
description: Connect privately to a Web App using Azure Private Endpoint
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 06/15/2021 Last updated : 07/01/2021
A Private Endpoint is a special network interface (NIC) for your Azure Web App i
When you create a Private Endpoint for your Web App, it provides secure connectivity between clients on your private network and your Web App. The Private Endpoint is assigned an IP Address from the IP address range of your VNet. The connection between the Private Endpoint and the Web App uses a secure [Private Link][privatelink]. Private Endpoint is only used for incoming flows to your Web App. Outgoing flows will not use this Private Endpoint, but you can inject outgoing flows to your network in a different subnet through the [VNet integration feature][vnetintegrationfeature].
+Each slot of an app is configured separately. You can plug up to 100 Private Endpoints per slot. You cannot share a Private Endpoint between slots.
+ The Subnet where you plug the Private Endpoint can have other resources in it, you don't need a dedicated empty Subnet. You can also deploy the Private Endpoint in a different region than the Web App.
In the Web HTTP logs of your Web App, you will find the client source IP. This f
> [!div class="mx-imgBorder"] > ![Web App Private Endpoint global overview](media/private-endpoint/global-schema-web-app.png) + ## DNS When you use Private Endpoint for Web App, the requested URL must match the name of your Web App. By default mywebappname.azurewebsites.net.
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-java.md
JBoss EAP is only available on the Linux version of App Service. Please select t
Clone the Pet Store demo application. ```azurecli-interactive
-git clone https://github.com/andxu/migrate-javaee-app-to-azure-training.git
+git clone https://github.com/agoncal/agoncal-application-petstore-ee7.git
``` Change directory to the cloned project. ```azurecli-interactive
-cd migrate-javaee-app-to-azure-training
+cd agoncal-application-petstore-ee7
``` ::: zone-end
azure-cache-for-redis Cache Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-best-practices.md
If you would like to test how your code works under error conditions, consider u
* **We recommend using Dv2 VM Series** for your client as they have better hardware and will give the best results. * Make sure the client VM you use has **at least as much compute and bandwidth** as the cache being tested. * **Test under failover conditions** on your cache. It's important to ensure that you don't test the performance of your cache only under steady state conditions. Test under failover conditions, too, and measure the CPU/Server Load on your cache during that time. You can start a failover by [rebooting the primary node](cache-administration.md#reboot). Testing under failover conditions allows you to see how your application behaves in terms of throughput and latency during failover conditions. Failover can happen during updates and during an unplanned event. Ideally you don't want to see CPU/Server Load peak to more than, say, 80% even during a failover as that can affect performance.
-* **Some cache sizes** are hosted on VMs with four or more cores. Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. [See here for details around VM sizes and cores](cache-planning-faq.md#azure-cache-for-redis-performance)
+* **Some cache sizes** are hosted on VMs with four or more cores. Distribute the TLS encryption/decryption and TLS connection/disconnection workloads across multiple cores to bring down overall CPU usage on the cache VMs. [See here for details around VM sizes and cores](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance)
* **Enable VRSS** on the client machine if you are on Windows. [See here for details](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn383582(v=ws.11)). Example PowerShell script: >PowerShell -ExecutionPolicy Unrestricted Enable-NetAdapterRSS -Name ( Get-NetAdapter).Name * **Consider using Premium tier Redis instances**. These cache sizes will have better network latency and throughput because they're running on better hardware for both CPU and Network. > [!NOTE]
- > Our observed performance results are [published here](cache-planning-faq.md#azure-cache-for-redis-performance) for your reference. Also, be aware that SSL/TLS adds some overhead, so you may get different latencies and/or throughput if you're using transport encryption.
+ > Our observed performance results are [published here](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) for your reference. Also, be aware that SSL/TLS adds some overhead, so you may get different latencies and/or throughput if you're using transport encryption.
### Redis-Benchmark examples
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-configure.md
Each pricing tier has different limits for client connections, memory, and bandw
| Azure Cache for Redis metric | More information | | | |
-| Network bandwidth usage |[Cache performance - available bandwidth](cache-planning-faq.md#azure-cache-for-redis-performance) |
+| Network bandwidth usage |[Cache performance - available bandwidth](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) |
| Connected clients |[Default Redis server configuration - max clients](#maxclients) | | Server load |[Usage charts - Redis Server Load](cache-how-to-monitor.md#usage-charts) |
-| Memory usage |[Cache performance - size](cache-planning-faq.md#azure-cache-for-redis-performance) |
+| Memory usage |[Cache performance - size](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) |
To upgrade your cache, select **Upgrade now** to change the pricing tier and [scale](#scale) your cache. For more information on choosing a pricing tier, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier)
azure-cache-for-redis Cache Development Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-development-faq.md
Redis Databases are just a logical separation of data within the same Redis inst
## Next steps
-Learn about other [Azure Cache for Redis FAQs](cache-faq.md).
+Learn about other [Azure Cache for Redis FAQs](cache-faq.yml).
azure-cache-for-redis Cache Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-faq.md
- Title: Azure Cache for Redis FAQ
-description: Learn the answers to common questions, patterns, and best practices for Azure Cache for Redis
---- Previously updated : 04/29/2019-
-# Azure Cache for Redis FAQ
-Learn the answers to common questions, patterns, and best practices for Azure Cache for Redis.
-
-* [Planning FAQs](cache-planning-faq.md)
-* [Development FAQs](cache-development-faq.md)
-* [Management FAQs](cache-management-faq.md)
-* [Monitoring and troubleshooting FAQs](cache-monitor-troubleshoot-faq.md)
-
-## Deprecated cache services
-
-### Managed Cache service
-[Managed Cache service was retired November 30, 2016.](https://azure.microsoft.com/blog/azure-managed-cache-and-in-role-cache-services-to-be-retired-on-11-30-2016/)
-
-To view archived documentation, see [Archived Managed Cache Service Documentation](/previous-versions/azure/azure-services/dn386094(v=azure.100)).
-
-### In-Role Cache
-[In-Role Cache was retired November 30, 2016.](https://azure.microsoft.com/blog/azure-managed-cache-and-in-role-cache-services-to-be-retired-on-11-30-2016/)
-
-To view archived documentation, see [Archived In-Role Cache Documentation](/previous-versions/azure/azure-services/dn386103(v=azure.100)).
-
-["minIoThreads" configuration setting]: /previous-versions/dotnet/netframework-4.0/7w2sway1(v=vs.100)
-
-## What if my question isn't answered here?
-If your question isn't listed here, let us know and we'll help you find an answer.
-
-* To reach a wider audience, you can post a question on the [Microsoft Q&A question page for Azure Cache](/answers/topics/azure-cache-redis.html) and engage with the Azure Cache team and other members of the community.
-* If you want to make a feature request, you can submit your requests and ideas to [Azure Cache for Redis User Voice](https://feedback.azure.com/forums/169382-cache).
-* You can also send your question to us at [azurecache@microsoft.com](mailto:azurecache@microsoft.com).
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-monitor.md
Each metric includes two versions. One metric measures performance for the entir
| Cache Hits |The number of successful key lookups during the specified reporting interval. This number maps to `keyspace_hits` from the Redis [INFO](https://redis.io/commands/info) command. | | Cache Latency (Preview) | The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. | | Cache Misses |The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there may be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`. |
-| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](cache-planning-faq.md#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
+| Cache Read |The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. **This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.** |
| Cache Write |The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client. |
| Connected Clients |The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there may still be a few instances of connected clients because of internal processes and connections. |
| CPU |The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. |
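The cache-aside pattern described in the Cache Misses row can be sketched in a few lines. This is an illustrative stand-in only: a plain dictionary plays the role of the Redis client, and the counters mirror what `keyspace_hits` and `keyspace_misses` report.

```python
# Illustrative cache-aside sketch: a plain dict stands in for a Redis client,
# and the counters mirror keyspace_hits / keyspace_misses from INFO.
class CacheAside:
    def __init__(self, load_from_db):
        self.cache = {}
        self.load_from_db = load_from_db  # fallback loader (the "database")
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:            # cache hit
            self.hits += 1
            return self.cache[key]
        self.misses += 1                 # cache miss: load, then populate
        value = self.load_from_db(key)
        self.cache[key] = value
        return value

db = {"user:1": "alice"}
cache = CacheAside(lambda k: db[k])
cache.get("user:1")   # miss: loaded from the database and cached
cache.get("user:1")   # hit: served from the cache
print(cache.hits, cache.misses)  # 1 1
```

As the row notes, a miss followed by hits is the expected shape of this pattern; only a persistently low hit ratio warrants investigation.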
azure-cache-for-redis Cache Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-management-faq.md
For more information about the different connections limits for each tier, see [
## Next steps
-Learn about other [Azure Cache for Redis FAQs](cache-faq.md).
+Learn about other [Azure Cache for Redis FAQs](cache-faq.yml).
azure-cache-for-redis Cache Monitor Troubleshoot Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-monitor-troubleshoot-faq.md
The following are some common reasons for a cache disconnect.
For more information about monitoring and troubleshooting your Azure Cache for Redis instances, see [How to monitor Azure Cache for Redis](cache-how-to-monitor.md) and the various troubleshoot guides.
-Learn about other [Azure Cache for Redis FAQs](cache-faq.md).
+Learn about other [Azure Cache for Redis FAQs](cache-faq.yml).
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-overview.md
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
Consider the following options when choosing an Azure Cache for Redis tier:

* **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier, 6 GB - 1.2 TB; the Enterprise tiers, 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/) and [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
-* **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](cache-planning-faq.md#azure-cache-for-redis-performance).
+* **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance).
* **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.
-* **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](cache-planning-faq.md#azure-cache-for-redis-performance).
+* **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](/azure/azure-cache-for-redis/cache-planning-faq#azure-cache-for-redis-performance).
* **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache.
* **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss.
* **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).
azure-cache-for-redis Cache Planning Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-planning-faq.md
- Title: Azure Cache for Redis planning FAQs
-description: Learn the answers to common questions that help you plan for Azure Cache for Redis
---- Previously updated : 08/06/2020-
-# Azure Cache for Redis planning FAQs
-
-This article provides answers to common questions about how to plan for Azure Cache for Redis.
-
-## Common questions and answers
-This section covers the following FAQs:
-
-* [Azure Cache for Redis performance](#azure-cache-for-redis-performance)
-* [In what region should I locate my cache?](#in-what-region-should-i-locate-my-cache)
-* [Where do my cached data reside?](#where-do-my-cached-data-reside)
-* [How am I billed for Azure Cache for Redis?](#how-am-i-billed-for-azure-cache-for-redis)
-* [Can I use Azure Cache for Redis with Azure Government Cloud, Azure China 21Vianet Cloud, or Microsoft Azure Germany?](#can-i-use-azure-cache-for-redis-with-azure-government-cloud-azure-china-21vianet-cloud-or-microsoft-azure-germany)
-
-### Azure Cache for Redis performance
-The following table shows the maximum bandwidth values observed while testing various sizes of Standard and Premium caches using `redis-benchmark.exe` from an IaaS VM against the Azure Cache for Redis endpoint. For TLS throughput, redis-benchmark is used with stunnel to connect to the Azure Cache for Redis endpoint.
-
->[!NOTE]
->These values are not guaranteed, and there is no SLA for these numbers, but they should be typical. Load test your own application to determine the right cache size for your application.
->These numbers might change as we post newer results periodically.
->
-
-From this table, we can draw the following conclusions:
-
-* Throughput for the caches that are the same size is higher in the Premium tier as compared to the Standard tier. For example, with a 6 GB Cache, throughput of P1 is 180,000 requests per second (RPS) as compared to 100,000 RPS for C3.
-* With Redis clustering, throughput increases linearly as you increase the number of shards (nodes) in the cluster. For example, if you create a P4 cluster of 10 shards, then the available throughput is 400,000 * 10 = 4 million RPS.
-* Throughput for bigger key sizes is higher in the Premium tier as compared to the Standard Tier.
-
-| Pricing tier | Size | CPU cores | Available bandwidth | 1-KB value size | 1-KB value size |
-| | | | | | |
-| **Standard cache sizes** | | |**Megabits per sec (Mb/s) / Megabytes per sec (MB/s)** |**Requests per second (RPS) Non-SSL** |**Requests per second (RPS) SSL** |
-| C0 | 250 MB | Shared | 100 / 12.5 | 15,000 | 7,500 |
-| C1 | 1 GB | 1 | 500 / 62.5 | 38,000 | 20,720 |
-| C2 | 2.5 GB | 2 | 500 / 62.5 | 41,000 | 37,000 |
-| C3 | 6 GB | 4 | 1000 / 125 | 100,000 | 90,000 |
-| C4 | 13 GB | 2 | 500 / 62.5 | 60,000 | 55,000 |
-| C5 | 26 GB | 4 | 1,000 / 125 | 102,000 | 93,000 |
-| C6 | 53 GB | 8 | 2,000 / 250 | 126,000 | 120,000 |
-| **Premium cache sizes** | |**CPU cores per shard** | **Megabits per sec (Mb/s) / Megabytes per sec (MB/s)** |**Requests per second (RPS) Non-SSL, per shard** |**Requests per second (RPS) SSL, per shard** |
-| P1 | 6 GB | 2 | 1,500 / 187.5 | 180,000 | 172,000 |
-| P2 | 13 GB | 4 | 3,000 / 375 | 350,000 | 341,000 |
-| P3 | 26 GB | 4 | 3,000 / 375 | 350,000 | 341,000 |
-| P4 | 53 GB | 8 | 6,000 / 750 | 400,000 | 373,000 |
-| P5 | 120 GB | 20 | 6,000 / 750 | 400,000 | 373,000 |
-
-For instructions on setting up stunnel or downloading the Redis tools such as `redis-benchmark.exe`, see [How can I run Redis commands?](cache-development-faq.md#how-can-i-run-redis-commands).
-
-### In what region should I locate my cache?
-For best performance and lowest latency, locate your Azure Cache for Redis in the same region as your cache client application.
-
-### Where do my cached data reside?
-Azure Cache for Redis stores your application data in the RAM of the VM or VMs (depending on the tier) that host your cache. By default, your data reside strictly in the Azure region you've selected. There are two cases where your data may leave a region:
-* When you enable persistence on the cache, Azure Cache for Redis will back up your data to an Azure Storage account you own. If the storage account you provide happens to be in another region, a copy of your data will end up there.
-* If you set up geo-replication and your secondary cache is in a different region, as would normally be the case, your data will be replicated to that region.
-
-You'll need to explicitly configure Azure Cache for Redis to use these features. You also have complete control over the region in which the storage account or secondary cache is located.
-
-### How am I billed for Azure Cache for Redis?
-Azure Cache for Redis pricing is available [here](https://azure.microsoft.com/pricing/details/cache/). The pricing page lists hourly and monthly rates. Caches are billed on a per-minute basis from the time that the cache is created until the time that a cache is deleted. There is no option for stopping or pausing the billing of a cache.
-
-### Can I use Azure Cache for Redis with Azure Government Cloud, Azure China 21Vianet Cloud, or Microsoft Azure Germany?
-Yes, Azure Cache for Redis is available in Azure Government Cloud, Azure China 21Vianet Cloud, and Microsoft Azure Germany. The URLs for accessing and managing Azure Cache for Redis are different in these clouds compared with Azure Public Cloud.
-
-| Cloud | DNS suffix for Redis |
-|||
-| Public | *.redis.cache.windows.net |
-| US Gov | *.redis.cache.usgovcloudapi.net |
-| Germany | *.redis.cache.cloudapi.de |
-| China | *.redis.cache.chinacloudapi.cn |
-
-For more information on considerations when using Azure Cache for Redis with other clouds, see the following links.
-- [Azure Government Databases - Azure Cache for Redis](../azure-government/compare-azure-government-global-azure.md)
-- [Azure China 21Vianet Cloud - Azure Cache for Redis](https://www.azure.cn/home/features/redis-cache/)
-- [Microsoft Azure Germany](https://azure.microsoft.com/overview/clouds/germany/)
-
-For information on using Azure Cache for Redis with PowerShell in Azure Government Cloud, Azure China 21Vianet Cloud, and Microsoft Azure Germany, see [How to connect to other clouds - Azure Cache for Redis PowerShell](cache-how-to-manage-redis-cache-powershell.md#how-to-connect-to-other-clouds).
-
-## Next steps
-
-Learn about other [Azure Cache for Redis FAQs](cache-faq.md).
azure-cache-for-redis Cache Redis Cache Arm Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-redis-cache-arm-provision.md
The following resources are defined in the template:
Resource Manager templates for the new [Premium tier](cache-overview.md#service-tiers) are also available. * [Create a Premium Azure Cache for Redis with clustering](https://azure.microsoft.com/resources/templates/redis-premium-cluster-diagnostics/)
-* [Create Premium Azure Cache for Redis with data persistence](https://azure.microsoft.com/resources/templates/201-redis-premium-persistence/)
+* [Create Premium Azure Cache for Redis with data persistence](https://azure.microsoft.com/resources/templates/redis-premium-persistence/)
* [Create Premium Redis Cache deployed into a Virtual Network](https://azure.microsoft.com/resources/templates/redis-premium-vnet/) To check for the latest templates, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/) and search for _Azure Cache for Redis_.
azure-functions Durable Functions Dotnet Entities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-dotnet-entities.md
Title: Developer's Guide to Durable Entities in .NET - Azure Functions
description: How to work with durable entities in .NET with the Durable Functions extension for Azure Functions. Previously updated : 10/06/2019 Last updated : 06/30/2021 #Customer intent: As a developer, I want to learn how to use Durable Entities in .NET so I can persist object state in a serverless context.
If only the entity key is specified and a unique implementation can't be found a
As usual, all parameter and return types must be JSON-serializable. Otherwise, serialization exceptions are thrown at runtime. We also enforce some additional rules:
+* Entity interfaces must be defined in the same assembly as the entity class.
* Entity interfaces must only define methods.
* Entity interfaces must not contain generic parameters.
* Entity interface methods must not have more than one parameter.
-* Entity interface methods must return `void`, `Task`, or `Task<T>`
+* Entity interface methods must return `void`, `Task`, or `Task<T>`.
If any of these rules are violated, an `InvalidOperationException` is thrown at runtime when the interface is used as a type argument to `SignalEntity` or `CreateProxy`. The exception message explains which rule was broken.
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-grid-trigger.md
Use the function trigger to respond to an event sent to an Event Grid topic.
For information on setup and configuration details, see the [overview](./functions-bindings-event-grid.md).
+> [!NOTE]
+> Event Grid triggers aren't natively supported in an internal load balancer App Service Environment. The trigger uses an HTTP request that can't reach the function app without a gateway into the virtual network.
+ ## Example # [C#](#tab/csharp)
In Azure Functions 2.x and higher, you also have the option to use the following
* `Microsoft.Azure.EventGrid.Models.EventGridEvent`- Defines properties for the fields common to all event types. > [!NOTE]
-> In Functions v1 if you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler will display a "deprecated" message and advise you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`.
+> If you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler displays a "deprecated" message and advises you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`.
+
+### Additional types
-### Additional types
Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace. In addition, you can bind to the `CloudEvent` type from the [Azure.Messaging](/dotnet/api/azure.messaging.cloudevent) namespace. # [C# Script](#tab/csharp-script)
In Azure Functions 2.x and higher, you also have the option to use the following
* `Microsoft.Azure.EventGrid.Models.EventGridEvent`- Defines properties for the fields common to all event types. > [!NOTE]
-> In Functions v1 if you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler will display a "deprecated" message and advise you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`. For information about how to reference NuGet packages in a C# script function, see [Using NuGet packages](functions-reference-csharp.md#using-nuget-packages)
+> If you try to bind to `Microsoft.Azure.WebJobs.Extensions.EventGrid.EventGridEvent`, the compiler will display a "deprecated" message and advise you to use `Microsoft.Azure.EventGrid.Models.EventGridEvent` instead. To use the newer type, reference the [Microsoft.Azure.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.EventGrid) NuGet package and fully qualify the `EventGridEvent` type name by prefixing it with `Microsoft.Azure.EventGrid.Models`. For information about how to reference NuGet packages in a C# script function, see [Using NuGet packages](functions-reference-csharp.md#using-nuget-packages)
### Additional types Apps using the 3.0.0 or higher version of the Event Grid extension use the `EventGridEvent` type from the [Azure.Messaging.EventGrid](/dotnet/api/azure.messaging.eventgrid.eventgridevent) namespace. In addition, you can bind to the `CloudEvent` type from the [Azure.Messaging](/dotnet/api/azure.messaging.cloudevent) namespace.
azure-functions Functions Bindings Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-event-grid.md
The code in this reference defaults to .NET Core syntax, used in Functions versi
Working with the trigger and bindings requires that you reference the appropriate package. The NuGet package is used for .NET class libraries while the extension bundle is used for all other application types.
-| Language | Add by... | Remarks
-|-||-|
-| C# | Installing the [NuGet package], version 2.x | |
-| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. |
-| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
+| Language | Add by... | Remarks |
+||||
+| C# | Installing the [NuGet package], version 2.x | |
+| C# Script, Java, JavaScript, Python, PowerShell | Registering the [extension bundle] | The [Azure Tools extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) is recommended to use with Visual Studio Code. |
+| C# Script (online-only in Azure portal) | Adding a binding | To update existing binding extensions without having to republish your function app, see [Update your extensions]. |
[core tools]: ./functions-run-local.md [extension bundle]: ./functions-bindings-register.md#extension-bundles
azure-functions Functions Bindings Signalr Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-signalr-service.md
To use the SignalR Service annotations in Java functions, you need to add a depe
</dependency> ```
+## Connection string settings
+
+Add the `AzureSignalRConnectionString` key to the _host.json_ file that points to the application setting with your connection string. For local development, this value may exist in the _local.settings.json_ file.
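For local development, the setting might look like the following sketch of a _local.settings.json_ file; the connection string value is a placeholder, and the worker runtime shown is an assumption:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureSignalRConnectionString": "<your-signalr-connection-string>"
  }
}
```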
+ ## Next steps - [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md)
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-queue-trigger.md
The queue trigger implements a random exponential back-off algorithm to reduce t
The algorithm uses the following logic:

-- When a message is found, the runtime waits two seconds and then checks for another message
-- When no message is found, it waits about four seconds before trying again.
+- When a message is found, the runtime waits 100 milliseconds and then checks for another message
+- When no message is found, it waits about 200 milliseconds before trying again.
- After subsequent failed attempts to get a queue message, the wait time continues to increase until it reaches the maximum wait time, which defaults to one minute. - The maximum wait time is configurable via the `maxPollingInterval` property in the [host.json file](functions-host-json-v1.md#queues).
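The back-off described above can be sketched as follows. The doubling factor and jitter range are illustrative assumptions; only the growth toward, and cap at, `maxPollingInterval` reflects the documented behavior.

```python
import random

def polling_intervals(max_polling_interval=60.0, initial=0.1,
                      factor=2.0, attempts=12):
    """Yield successive wait times (seconds) while the queue stays empty:
    exponential growth with random jitter, capped at max_polling_interval."""
    wait = initial
    for _ in range(attempts):
        # jitter keeps many hosts from polling in lockstep
        yield min(wait * random.uniform(0.8, 1.2), max_polling_interval)
        wait = min(wait * factor, max_polling_interval)

waits = list(polling_intervals())
assert all(w <= 60.0 for w in waits)   # never exceeds maxPollingInterval
```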
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-storage-queue.md
This section describes the global configuration settings available for this bind
|Property |Default | Description | ||||
-|maxPollingInterval|00:00:01|The maximum interval between queue polls. Minimum is 00:00:00.100 (100 ms) and increments up to 00:01:00 (1 min). In Functions 2.x and higher the data type is a `TimeSpan`, while in version 1.x it is in milliseconds.|
+|maxPollingInterval|00:01:00|The maximum interval between queue polls. The polling interval starts at 00:00:00.100 (100 ms) and increments up to `maxPollingInterval`, which defaults to 00:01:00 (1 min) and must not be set below 00:00:00.100 (100 ms). In Functions 2.x and later, the data type is a `TimeSpan`; in Functions 1.x, it is in milliseconds.|
|visibilityTimeout|00:00:00|The time interval between retries when processing of a message fails. |
|batchSize|16|The number of queue messages that the Functions runtime retrieves simultaneously and processes in parallel. When the number being processed gets down to the `newBatchThreshold`, the runtime gets another batch and starts processing those messages. So the maximum number of concurrent messages being processed per function is `batchSize` plus `newBatchThreshold`. This limit applies separately to each queue-triggered function. <br><br>If you want to avoid parallel execution for messages received on one queue, you can set `batchSize` to 1. However, this setting eliminates concurrency as long as your function app runs only on a single virtual machine (VM). If the function app scales out to multiple VMs, each VM could run one instance of each queue-triggered function.<br><br>The maximum `batchSize` is 32. |
|maxDequeueCount|5|The number of times to try processing a message before moving it to the poison queue.|
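A minimal _host.json_ sketch (Functions 2.x+ layout) combining the settings above; the values shown are illustrative, not recommendations:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:30",
      "visibilityTimeout": "00:00:10",
      "batchSize": 16,
      "newBatchThreshold": 8,
      "maxDequeueCount": 5
    }
  }
}
```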
azure-functions Functions Create Serverless Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-serverless-api.md
In this section, you create a new proxy, which serves as a frontend to your over
Repeat the steps to [Create a function app](./functions-get-started.md) to create a new function app in which you will create your proxy. This new app's URL serves as the frontend for our API, and the function app you were previously editing serves as a backend. 1. Navigate to your new frontend function app in the portal.
-1. Select **Platform Features** and choose **Application Settings**.
+1. Select **Configuration** and choose **Application Settings**.
1. Scroll down to **Application settings**, where key/value pairs are stored, and create a new setting with the key `HELLO_HOST`. Set its value to the host of your backend function app, such as `<YourBackendApp>.azurewebsites.net`. This value is part of the URL that you copied earlier when testing your HTTP function. You'll reference this setting in the configuration later. > [!NOTE]
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/data-platform-metrics.md
For most resources in Azure, platform metrics are stored for 93 days. There are
> You can [send platform metrics for Azure Monitor resources to a Log Analytics workspace](./resource-logs.md#send-to-azure-storage) for long term trending.
+> [!NOTE]
+> As mentioned above, platform metrics for most Azure resources are stored for 93 days. However, you can query no more than 30 days' worth of data on any single chart in the Metrics tile. This limitation doesn't apply to log-based metrics. If you see a blank chart, or your chart displays only part of the metric data, verify that the difference between the start and end dates in the time picker doesn't exceed 30 days. After you have selected a 30-day interval, you can [pan](https://docs.microsoft.com/azure/azure-monitor/essentials/metrics-charts#pan) the chart to view the full retention window.
+
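The 30-day limit in the note above can be checked client-side before building a chart; a minimal sketch, assuming naive datetimes:

```python
from datetime import datetime, timedelta

MAX_CHART_WINDOW = timedelta(days=30)  # single-chart limit from the note above

def window_ok(start: datetime, end: datetime) -> bool:
    """True if the time-picker range fits on a single metrics chart."""
    return timedelta(0) < end - start <= MAX_CHART_WINDOW

assert window_ok(datetime(2021, 6, 1), datetime(2021, 6, 30))       # 29 days
assert not window_ok(datetime(2021, 5, 1), datetime(2021, 6, 30))   # 60 days
```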
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/cross-workspace-query.md
description: This article describes how you can query against resources from mul
Previously updated : 04/11/2021 Last updated : 06/30/2021
-# Perform log query in Azure Monitor that span across workspaces and apps
+# Perform log queries in Azure Monitor that span across workspaces and apps
-Azure Monitor Logs support query across multiple Log Analytics workspaces and Application Insights app in the same resource group, another resource group, or another subscription. This provides you with a system-wide view of your data.
+Azure Monitor Logs support querying across multiple Log Analytics workspaces and Application Insights apps in the same resource group, another resource group, or another subscription. This provides you with a system-wide view of your data.
+
+If you manage subscriptions in other Azure Active Directory (Azure AD) tenants through [Azure Lighthouse](/azure/lighthouse/overview), you can include [Log Analytics workspaces created in those customer tenants](/azure/lighthouse/how-to/monitor-at-scale) in your queries.
There are two methods to query data that is stored in multiple workspaces and apps:+ 1. Explicitly by specifying the workspace and app details. This technique is detailed in this article. 2. Implicitly using [resource-context queries](./design-logs-deployment.md#access-mode). When you query in the context of a specific resource, resource group, or subscription, the relevant data will be fetched from all workspaces that contain data for these resources. Application Insights data that is stored in apps will not be fetched. > [!IMPORTANT]
-> If you are using a [workspace-based Application Insights resource](../app/create-workspace-resource.md) telemetry is stored in a Log Analytics workspace with all other log data. Use the workspace() expression to write a query that includes application in multiple workspaces. For multiple applications in the same workspace, you don't need a cross workspace query.
-
+> If you are using a [workspace-based Application Insights resource](../app/create-workspace-resource.md), telemetry is stored in a Log Analytics workspace with all other log data. Use the workspace() expression to write a query that includes applications in multiple workspaces. For multiple applications in the same workspace, you don't need a cross workspace query.
## Cross-resource query limits
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/roles-permissions-security.md
Azure Monitor's built-in roles are designed to help limit access to resources
### Monitoring Reader

People assigned the Monitoring Reader role can view all monitoring data in a subscription but cannot modify any resource or edit any settings related to monitoring resources. This role is appropriate for users in an organization, such as support or operations engineers, who need to be able to:
-* View monitoring dashboards in the portal and create their own private monitoring dashboards.
+* View monitoring dashboards in the portal.
* View alert rules defined in [Azure Alerts](alerts/alerts-overview.md) * Query for metrics using the [Azure Monitor REST API](/rest/api/monitor/metrics), [PowerShell cmdlets](powershell-samples.md), or [cross-platform CLI](cli-samples.md). * Query the Activity Log using the portal, Azure Monitor REST API, PowerShell cmdlets, or cross-platform CLI.
People assigned the Monitoring Reader role can view all monitoring data in a sub
### Monitoring Contributor

People assigned the Monitoring Contributor role can view all monitoring data in a subscription and create or modify monitoring settings, but cannot modify any other resources. This role is a superset of the Monitoring Reader role, and is appropriate for members of an organization's monitoring team or managed service providers who, in addition to the permissions above, also need to be able to:
-* Publish monitoring dashboards as a shared dashboard.
+* View monitoring dashboards in the portal and create their own private monitoring dashboards.
* Set [diagnostic settings](essentials/diagnostic-settings.md) for a resource.\* * Set the [log profile](essentials/activity-log.md#legacy-collection-methods) for a subscription.\* * Set alert rules activity and settings via [Azure Alerts](alerts/alerts-overview.md).
azure-netapp-files Azure Netapp Files Develop With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-develop-with-rest-api.md
na ms.devlang: na Previously updated : 06/02/2020 Last updated : 06/29/2021 # Develop for Azure NetApp Files with REST API
The REST API specification for Azure NetApp Files is published through [GitHub](
`https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager`
+## Considerations
+
+* When the API limit has been exceeded, the HTTP response code is **429**. For example:
+
+ `"Microsoft.Azure.ResourceProvider.Common.Exceptions.ResourceProviderException: Error getting Pool. Rate limit exceeded for this endpoint - try again later > CloudVolumes.Service.Client.Client.ApiException: Error calling V2DescribePool: {\"code\":429,\"message\":\"Rate limit exceeded for this endpoint - try again later\"}`
+
+ This response code can come from throttling or a temporary condition. See [Azure Resource Manager HTTP 429 response code](../azure-resource-manager/management/request-limits-and-throttling.md#error-code) for more information.
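A common way to handle the 429 response described above is to retry with exponential backoff. This sketch uses a generic callable standing in for the REST request; the function name and `(status, body)` tuple shape are illustrative, not part of any NetApp Files client:

```python
import time

def call_with_retry(request, max_retries=5, base_delay=1.0):
    """Retry a callable returning (status_code, body) while it returns HTTP 429,
    sleeping with exponential backoff between attempts."""
    for attempt in range(max_retries):
        status, body = request()
        if status != 429:  # success, or a non-throttling error: stop retrying
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ... by default
    return status, body  # still throttled after all retries

# Stand-in for a NetApp Files REST call: throttled twice, then succeeds.
responses = iter([(429, "rate limit"), (429, "rate limit"), (200, "ok")])
status, body = call_with_retry(lambda: next(responses), base_delay=0.0)
print(status, body)  # 200 ok
```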
## Access the Azure NetApp Files REST API
The following example shows how to create a snapshot of a volume:
## Next steps
-[See the Azure NetApp Files REST API reference](/rest/api/netapp/)
+[See the Azure NetApp Files REST API reference](/rest/api/netapp/)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-volumes-dual-protocol.md
na ms.devlang: na Previously updated : 06/14/2021 Last updated : 06/29/2021 # Create a dual-protocol (NFSv3 and SMB) volume for Azure NetApp Files
You need to set the following attributes for LDAP users and LDAP groups:
`objectClass: posixGroup`, `gidNumber: 555`
* All users and groups must have unique `uidNumber` and `gidNumber`, respectively.
+Azure Active Directory Domain Services (AADDS) doesn't allow you to modify POSIX attributes on users and groups created in the organizational AADDC Users OU. As a workaround, you can create a custom OU and create users and groups in the custom OU.
+
+If you are synchronizing the users and groups in your Azure AD tenancy to users and groups in the AADDC Users OU, you cannot move users and groups into a custom OU. Users and groups created in the custom OU will not be synchronized to your Azure AD tenancy. For more information, see [AADDS custom OU considerations and limitations](../active-directory-domain-services/create-ou.md#custom-ou-considerations-and-limitations).
+ ### Access Active Directory Attribute Editor
+
+ On a Windows system, you can access the Active Directory Attribute Editor as follows:
azure-percept Quickstart Percept Audio Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-audio-setup.md
Azure Percept Audio works out of the box with Azure Percept DK. No unique setup
1. (Optional) connect your speaker or headphones to your Azure Percept Audio device via the audio jack, labeled "Line Out." This will allow you to hear audio responses.
-1. Power on the devkit. LED L02 will change to blinking white, which indicates that the device was powered on and is authenticating.
+1. Power on the dev kit by connecting it to the power adaptor. LED L02 will change to blinking white, which indicates that the device was powered on and is authenticating.
1. Wait for the authentication process to complete, which takes up to 5 minutes.
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-set-up.md
To verify if your Azure account is an "owner" or "contributor" within th
1. Click **View your device stream**. If this is the first time viewing the video stream of your device, you will see a notification that a new model is being deployed in the upper right-hand corner. This may take a few minutes.
- :::image type="content" source="./media/quickstart-percept-dk-setup/portal-03-1-start-video-stream.png" alt-text="View your video stream.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/view-stream.png" alt-text="View your video stream.":::
Once the model has deployed, you will get another notification with a **View stream** link. Click on the link to view the video stream from your Azure Percept Vision camera in a new browser window. The dev kit is preloaded with an AI model that automatically performs object detection of many common objects.
To verify if your Azure account is an "owner" or "contributor" within th
1. Azure Percept Studio also has a number of sample AI models. To deploy a sample model to your dev kit, navigate back to your device page and click **Deploy a sample model**.
- :::image type="content" source="./media/quickstart-percept-dk-setup/portal-04-explore-prebuilt.png" alt-text="Explore pre-built models.":::
+ :::image type="content" source="./media/quickstart-percept-dk-setup/deploy-sample-model.png" alt-text="Explore pre-built models.":::
1. Select a sample model from the library and click **Deploy to device**.
azure-resource-manager Microsoft Common Dropdown https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/microsoft-common-dropdown.md
When filtering is enabled, the control includes a text box for adding the filter
"placeholder": "", "defaultValue": "Value two", "toolTip": "",
- "multiselect": true, 
-   "selectAll": true, 
-   "filter": true, 
-   "filterPlaceholder": "Filter items ...", 
-   "multiLine": true, 
-   "defaultDescription": "A value for selection", 
+ "multiselect": true,
+ "selectAll": true,
+ "filter": true,
+ "filterPlaceholder": "Filter items ...",
+ "multiLine": true,
+ "defaultDescription": "A value for selection",
"constraints": { "allowedValues": [ {
azure-video-analyzer Configure Signal Gate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/configure-signal-gate.md
Example diagram:
> [!IMPORTANT]
> The preceding diagrams assume that every event arrives at the same instant in physical time and media time. That is, they assume that there are no late arrivals.
-## Next steps
+### Naming video or files
+
+Pipelines let you record videos to the cloud, or as MP4 files on the edge device. These can be generated by [continuous video recording](use-continuous-video-recording.md) or by [event-based video recording](record-event-based-live-video.md).
+
+The recommended naming structure for recording to the cloud is to name the video resource as "<anytext>-${System.TopologyName}-${System.PipelineName}". A given live pipeline can only connect to one RTSP-capable IP camera, and you should record the input from that camera to one video resource. As an example, you can set the `VideoName` on the Video Sink as follows:
+
+```
+"VideoName": "sampleVideo-${System.TopologyName}-${System.PipelineName}"
+```
+Note that the substitution pattern is defined by the `$` sign followed by braces: **${variableName}**.
+
+When recording to MP4 files on the edge device using event-based recording, you can use:
+
+```
+"fileNamePattern": "sampleFilesFromEVR-${System.TopologyName}-${System.PipelineName}-${fileSinkOutputName}-${System.Runtime.DateTime}"
+```
+
+> [!NOTE]
+> In the example above, the variable **fileSinkOutputName** is a sample variable name that you define when creating the live pipeline. This is **not** a system variable. Note how the use of **DateTime** ensures a unique MP4 file name for each event.
+
+#### System variables
-Try out the [Event-based video recording tutorial](record-event-based-live-video.md). Start by editing the [topology.json](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-video-sink/topology.json). Modify the parameters for the signalgateProcessor node, and then follow the rest of the tutorial. Review the video recordings to analyze the effect of the parameters.
+Some system defined variables that you can use are:
+
+| System Variable | Description | Example |
+| :-- | :-- | :-- |
+| System.Runtime.DateTime | UTC date time in ISO8601 file compliant format (basic representation YYYYMMDDThhmmss). | 20200222T173200Z |
+| System.Runtime.PreciseDateTime | UTC date time in ISO8601 file compliant format with milliseconds (basic representation YYYYMMDDThhmmss.sss). | 20200222T173200.123Z |
+| System.TopologyName | User provided name of the executing pipeline topology. | IngestAndRecord |
+| System.PipelineName | User provided name of the executing live pipeline. | camera001 |
+
+> [!TIP]
+> System.Runtime.PreciseDateTime and System.Runtime.DateTime cannot be used when naming videos in the cloud.
+
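The substitution patterns above can be resolved mechanically. The following sketch is illustrative only (the `expand` helper and the sample variable values are assumptions, not Video Analyzer code); it replaces `${variableName}` placeholders from a dictionary, generating `System.Runtime.DateTime` in the ISO8601 basic format shown in the table:

```python
import re
from datetime import datetime, timezone

def expand(pattern, variables):
    """Replace each ${name} placeholder in `pattern` with its value from `variables`."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: variables[m.group(1)], pattern)

variables = {
    "System.TopologyName": "IngestAndRecord",
    "System.PipelineName": "camera001",
    # ISO8601 basic representation, e.g. 20200222T173200Z
    "System.Runtime.DateTime": datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ"),
}

print(expand("sampleVideo-${System.TopologyName}-${System.PipelineName}", variables))
# sampleVideo-IngestAndRecord-camera001
```

Appending a runtime timestamp variable, as in the `fileNamePattern` example, is what keeps each generated MP4 file name unique.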
+## Next steps
+Try out the [Event-based video recording tutorial](record-event-based-live-video.md). Start by editing the [topology.json](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/evr-hubMessage-video-sink/topology.json). Modify the parameters for the signalgateProcessor node, and then follow the rest of the tutorial. Review the video recordings to analyze the effect of the parameters.
azure-video-analyzer Monitor Log Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/monitor-log-edge.md
As with other IoT Edge modules, you can also [examine the container logs](../../
* `MediaPipeline`: Low-level logs that might offer insight when you're troubleshooting problems, like difficulties establishing a connection with an RTSP-capable camera.

### Generating debug logs
+In certain cases, to help Azure support resolve a problem, you might need to generate more detailed logs than the ones described previously. To generate these logs:
-In certain cases, to help Azure support resolve a problem, you might need to generate more detailed logs than the ones described previously. To generate these logs:
+1. Sign in to the [Azure portal](https://portal.azure.com), and go to your IoT hub.
+1. On the left pane, select **IoT Edge**.
+1. In the list of devices, select the ID of the target device.
+1. At the top of the pane, select **Set Modules**.
-1. [Link the module storage to the device storage](../../iot-edge/how-to-access-host-storage-from-module.md#link-module-storage-to-device-storage) via `createOptions`. If you look at a [deployment manifest template](https://github.com/Azure-Samples/azure-video-analyzer-iot-edge-csharp/blob/master/src/edge/deployment.template.json) from the quickstarts, you'll see this code:
+ ![Screenshot of the "Set Modules" button in the Azure portal.](media/troubleshoot/set-modules.png)
- ```json
- "createOptions": {
- …
- "Binds": [
- "/var/local/videoAnalyzer/:/var/lib/videoAnalyzer/"
- ]
- }
- ```
+1. In the **IoT Edge Modules** section, look for and select **avaedge**.
+1. Select **Module Identity Twin**. An editable pane opens.
+1. Under **desired key**, add the following key/value pair:
+
+ `"DebugLogsDirectory": "/var/lib/videoanalyzer/logs"`
+
+ > [!NOTE]
+ > This command binds the logs folders between the Edge device and the container. If you want to collect the logs in a different location on the device:
+ >
+ > 1. Create a binding for the Debug Log location in the **Binds** section, replacing the **$DEBUG_LOG_LOCATION_ON_EDGE_DEVICE** and **$DEBUG_LOG_LOCATION** with the location you want:
+ > `/var/$DEBUG_LOG_LOCATION_ON_EDGE_DEVICE:/var/$DEBUG_LOG_LOCATION`
+ > 2. Use the following command, replacing **$DEBUG_LOG_LOCATION** with the location used in the previous step:
+ > `"DebugLogsDirectory": "/var/$DEBUG_LOG_LOCATION"`
- This code lets the Edge module write logs to the device storage path `/var/local/videoAnalyzer/`.
+1. Select **Save**.
- 1. Add the following `desired` property to the module:
+The module will now write debug logs in a binary format to the device storage path `/var/local/videoAnalyzer/debuglogs/`. You can share these logs with Azure support.
- `"debugLogsDirectory": "/var/lib/videoAnalyzer/debuglogs/"`
+You can stop log collection by setting the value in **Module Identity Twin** to _null_. Go back to the **Module Identity Twin** page and update the parameter as follows:
-The module will now write debug logs in a binary format to the device storage path `/var/local/videoAnalyzer/debuglogs/`. You can share these logs with Azure support.
+ `"DebugLogsDirectory": ""`
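The portal steps above amount to patching the `avaedge` module twin's desired properties. As an illustration only (the `debug_logs_patch` helper is hypothetical; the documented procedure uses the portal UI), the patch body that enables or disables debug log collection can be sketched as:

```python
import json

def debug_logs_patch(container_dir="/var/lib/videoanalyzer/logs", enable=True):
    """Build a desired-properties patch for the avaedge module twin.

    An empty string disables debug log collection, matching the
    portal steps above.
    """
    value = container_dir if enable else ""
    return {"properties": {"desired": {"DebugLogsDirectory": value}}}

print(json.dumps(debug_logs_patch(), indent=2))
```

The same patch body could be applied with any IoT Hub twin-update mechanism instead of the portal.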
## FAQ
azure-video-analyzer Production Readiness https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/production-readiness.md
If you look at the sample pipelines for the quickstart, and tutorials such as [c
Also note that `allowedUnsecuredEndpoints` is set to `true`, as recommended for production environments where you will use TLS encryption to secure traffic.
-### Naming video or files
-
-Pipelines allows for recording of videos to the cloud, or as MP4 files on the edge device. These can be generated by [continuous video recording](use-continuous-video-recording.md) or by [event-based video recording](record-event-based-live-video.md).
-
-The recommended naming structure for recording to the cloud is to name the video resource as "<anytext>-${System.TopologyName}-${System.PipelineName}". A given live pipeline can only connect to one RTSP-capable IP camera, and you should record the input from that camera to one video resource. As an example, you can set the `VideoName` on the Video Sink as follows:
-
-```
-"VideoName": "sampleVideo-${System.TopologyName}-${System.PipelineName}"
-```
-Note that the substitution pattern is defined by the `$` sign followed by braces: **${variableName}**.
-
-When recording to MP4 files on the edge device using event-based recording, you can use:
-
-```
-"fileNamePattern": "sampleFilesFromEVR-${System.TopologyName}-${System.PipelineName}-${fileSinkOutputName}-${System.Runtime.DateTime}"
-```
-
-> [!Note]
-> In the example above, the variable **fileSinkOutputName** is a sample variable name that you define when creating the live pipeline. This is **not** a system variable. Note how the use of **DateTime** ensures a unique MP4 file name for each event.
-
-#### System variables
-
-Some system defined variables that you can use are:
-
-| System Variable | Description | Example |
-| : | :-- | :- |
-| System.Runtime.DateTime | UTC date time in ISO8601 file compliant format (basic representation YYYYMMDDThhmmss). | 20200222T173200Z |
-| System.Runtime.PreciseDateTime | UTC date time in ISO8601 file compliant format with milliseconds (basic representation YYYYMMDDThhmmss.sss). | 20200222T173200.123Z |
-| System.TopologyName | User provided name of the executing pipeline topology. | IngestAndRecord |
-| System.PipelineName | User provided name of the executing live pipeline. | camera001 |
-
-> [!Tip]
-> System.Runtime.PreciseDateTime and System.Runtime.DateTime cannot be used when naming videos in the cloud.
### Tips about maintaining your edge device

> [!Note]
azure-video-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/troubleshoot.md
To configure the Video Analyzer module to generate debug logs, do the following:
![Screenshot of the "Set Modules" button in the Azure portal.](media/troubleshoot/set-modules.png)

1. In the **IoT Edge Modules** section, look for and select **avaedge**.
-1. Select **Container Create Options**.
-1. In the **Binds** section, add the following command:
-
- `/var/local/videoanalyzer/logs:/var/lib/videoanalyzer/logs`
-
- > [!NOTE]
- > This command binds the logs folders between the Edge device and the container. If you want to collect the logs in a different location, use the following command, replacing **$LOG_LOCATION_ON_EDGE_DEVICE** with the location you want to use:
- > `/var/$LOG_LOCATION_ON_EDGE_DEVICE:/var/lib/videoanalyzer/logs`
-
-1. Select **Update**.
-1. Select **Review + Create**. A successful validation message is posted under a green banner.
-1. Select **Create**.
-1. Update **Module Identity Twin** to point to the DebugLogsDirectory parameter, which points to the directory in which the logs are collected:
-
- a. Under the **Modules** table, select **avaedge**.
- b. At the top of the pane, select **Module Identity Twin**. An editable pane opens.
- c. Under **desired key**, add the following key/value pair:
+1. Select **Module Identity Twin**. An editable pane opens.
+1. Under **desired key**, add the following key/value pair:
`"DebugLogsDirectory": "/var/lib/videoanalyzer/logs"`
To configure the Video Analyzer module to generate debug logs, do the following:
> 2. Use the following command, replacing **$DEBUG_LOG_LOCATION** with the location used in the previous step:
> `"DebugLogsDirectory": "/var/$DEBUG_LOG_LOCATION"`
- d. Select **Save**.
+1. Select **Save**.
1. You can stop log collection by setting the value in **Module Identity Twin** to _null_. Go back to the **Module Identity Twin** page and update the parameter as follows:
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-sql-automation.md
Title: SQL DB in Azure VM backup & restore via PowerShell description: Back up and restore SQL Databases in Azure VMs using Azure Backup and PowerShell. Previously updated : 03/15/2019 Last updated : 06/30/2021 ms.assetid: 57854626-91f9-4677-b6a2-5d12b6a866e1
$AnotherInstanceWithFullConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryCon
##### Alternate restore with log point-in-time ```powershell
-$AnotherInstanceWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -AlternateWorkloadRestore -VaultId $targetVault.ID
+$AnotherInstanceWithLogConfig = Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $bkpItem -TargetItem $TargetInstance -AlternateWorkloadRestore -VaultId $targetVault.ID
``` ##### Restore as Files
PointInTime : 1/1/0001 12:00:00 AM
> [!IMPORTANT]
> Make sure that the final recovery config object has all the necessary and proper values since the restore operation will be based on the config object.
+#### Alternate workload restore to a vault in secondary region
+
+> [!IMPORTANT]
+> Support for secondary region restores for SQL from PowerShell is available in Az 4.1.0 and later.
+
+If you have enabled cross-region restore, the recovery points will be replicated to the secondary, paired region as well. You can then fetch those recovery points and trigger a restore to a machine in that paired region. As with a normal restore, the target machine should be registered to the target vault in the secondary region. The following sequence of steps clarifies the end-to-end process.
+
+* Fetch the backup items which are replicated to the secondary region
+* For such an item, fetch the recovery points (distinct and/or logs) which are replicated to the secondary region
+* Then choose a target server, registered to a vault within the secondary paired region
+* Trigger the restore to that server and track it using the JobId.
+
+#### Fetch backup items from secondary region
+
+Fetch all the SQL backup items from the secondary region with the usual command but with an extra parameter to indicate that these items should be fetched from secondary region.
+
+```powershell
+$secondaryBkpItems = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureWorkload -WorkloadType MSSQL -VaultId $targetVault.ID -UseSecondaryRegion
+```
+
+##### Fetch distinct recovery points from secondary region
+
+Use [Get-AzRecoveryServicesBackupRecoveryPoint](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackuprecoverypoint) to fetch distinct (full/differential) recovery points for a backed-up SQL DB, adding a parameter to indicate that these recovery points should be fetched from the secondary region.
+
+```powershell
+$startDate = (Get-Date).AddDays(-7).ToUniversalTime()
+$endDate = (Get-Date).ToUniversalTime()
Get-AzRecoveryServicesBackupRecoveryPoint -Item $secondaryBkpItems[0] -VaultId $targetVault.ID -StartDate $startDate -EndDate $endDate -UseSecondaryRegion
+```
+
+The output is similar to the following example:
+
+```output
+RecoveryPointId    RecoveryPointType    RecoveryPointTime       ItemName             BackupManagementType
+---------------    -----------------    --------------------    -----------------    --------------------
+6660368097802      Full                 3/18/2019 8:09:35 PM    MSSQLSERVER;model    AzureWorkload
+```
+
+Use the 'RecoveryPointId' filter or an array filter to fetch the relevant recovery point.
+
+```powershell
+$FullRPFromSec = Get-AzRecoveryServicesBackupRecoveryPoint -Item $secondaryBkpItems[0] -VaultId $targetVault.ID -RecoveryPointId "6660368097802" -UseSecondaryRegion
+```
+
+##### Fetch log recovery points from secondary region
+
+Use the [Get-AzRecoveryServicesBackupRecoveryLogChain](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackuprecoverylogchain) PowerShell cmdlet with the parameter *-UseSecondaryRegion*, which returns the start and end times of an unbroken, continuous log chain for that SQL backup item from the secondary region. The desired point-in-time should be within this range.
+
+```powershell
+Get-AzRecoveryServicesBackupRecoveryLogChain -Item $secondaryBkpItems[0] -VaultId $targetVault.ID -UseSecondaryRegion
+```
+
+The output will be similar to the following example.
+
+```output
+ItemName                       StartTime              EndTime
+--------                       ---------              -------
+SQLDataBase;MSSQLSERVER;azu... 3/18/2019 8:09:35 PM   3/19/2019 12:08:32 PM
+```
+
+The above output means that you can restore to any point-in-time between the displayed start time and end time. The times are in UTC. Construct any point-in-time in PowerShell that's within the range shown above.
+
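Whatever point-in-time you construct must fall inside that log chain. As a quick illustrative check (shown in Python only for clarity; the actual restore uses the PowerShell cmdlets in this article, and the chosen timestamp below is hypothetical):

```python
from datetime import datetime

fmt = "%m/%d/%Y %I:%M:%S %p"  # matches the timestamps in the example output above

start = datetime.strptime("3/18/2019 8:09:35 PM", fmt)
end = datetime.strptime("3/19/2019 12:08:32 PM", fmt)

# A hypothetical point-in-time to restore to; it must lie inside the chain.
point_in_time = datetime.strptime("3/19/2019 10:00:00 AM", fmt)

print(start <= point_in_time <= end)  # True
```

A point-in-time outside this window would not have a continuous log chain behind it, so the restore request would fail.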
+#### Fetch target server from secondary region
+
+From the secondary region, we need a vault and a target server registered to that vault. Once we have the secondary region target container and the SQL instance, we can re-use the existing cmdlets to generate a restore workload configuration.
+
+First, we fetch the relevant vault present in the secondary region and then get the registered containers within that vault.
+
+```powershell
+$PairedRegionVault = Get-AzRecoveryServicesVault -ResourceGroupName SecondaryRG -Name PairedVault
+$seccontainer = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered -VaultId $PairedRegionVault.ID
+```
+
+Once the registered container is chosen, fetch the SQL instances within the container to which the DB should be restored.
+
+```powershell
+Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -VaultId $PairedRegionVault.ID -Container $seccontainer
+```
+
+From the output, choose the SQL server name and assign the output to a variable which will be used later for restore.
+
+```powershell
+$secSQLInstance = Get-AzRecoveryServicesBackupProtectableItem -WorkloadType MSSQL -ItemType SQLInstance -VaultId $PairedRegionVault.ID -Container $seccontainer -ServerName "sqlserver-0.corp.contoso.com"
+```
+
+#### Prepare the recovery configuration
+
+As documented [above](#determine-recovery-configuration) for the normal SQL restore, the same command can be re-used to generate the relevant recovery configuration.
+
+##### For full restores from secondary region
+
+```powershell
+Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -RecoveryPoint $FullRPFromSec[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID
+```
+
+##### For log point in time restores from secondary region
+
+```powershell
+Get-AzRecoveryServicesBackupWorkloadRecoveryConfig -PointInTime $PointInTime -Item $secondaryBkpItems[0] -TargetItem $secSQLInstance -AlternateWorkloadRestore -VaultId $vault.ID
+```
+
+Once the relevant configuration is obtained for a primary region restore or a secondary region restore, the same restore command can be used to trigger restores, which can then be tracked using the job IDs.
+ ### Restore with relevant configuration
+
+ Once the relevant recovery config object is obtained and verified, use the [Restore-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) PowerShell cmdlet to start the restore process.
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/selective-disk-backup-restore.md
For example:
```azurepowershell
$disks = ("0","1")
$targetVault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-p-recovery_vaults" -Name "rsv-p-servers"
+Set-AzRecoveryServicesVaultContext -Vault $targetVault
Get-AzRecoveryServicesBackupProtectionPolicy
$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "P-Servers"
```
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/quickstart-host-portal.md
Previously updated : 02/18/2021 Last updated : 06/29/2021 # Customer intent: As someone with a networking background, I want to connect to a virtual machine securely via RDP/SSH using a private IP address through my browser.
# Quickstart: Connect to a VM securely through a browser via private IP address
-You can connect to a virtual machine (VM) through your browser using the Azure portal and Azure Bastion. This quickstart article shows you how to configure Azure Bastion based on your VM settings, and then connect to your VM through the portal. The VM doesn't need a public IP address, client software, agent, or a special configuration. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md).
+You can connect to a virtual machine (VM) through your browser using the Azure portal and Azure Bastion. This quickstart article shows you how to configure Azure Bastion based on your VM settings. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. The VM doesn't need a public IP address, client software, agent, or a special configuration. If you don't need the public IP address on your VM for anything else, you can remove it. You then connect to your VM through the portal using the private IP address. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md).
## <a name="prereq"></a>Prerequisites
There are a few different ways to configure a bastion host. In the following ste
* **Public IP address name:** The name of the Public IP address resource.
* **Public IP address SKU:** Pre-configured as **Standard**.
* **Assignment:** Pre-configured to **Static**. You can't use a Dynamic assignment for Azure Bastion.
- * **Resource group**: The same resource group as the VM.
+ * **Resource group:** The same resource group as the VM.
:::image type="content" source="./media/quickstart-host-portal/create-bastion.png" alt-text="Screenshot of Step 3.":::

1. After completing the values, select **Create Azure Bastion using defaults**. Azure validates your settings, then creates the host. The host and its resources take about 5 minutes to create and deploy.
-## <a name="connect"></a>Connect
+## <a name="remove"></a>Remove VM public IP address
++
+## <a name="connect"></a>Connect to a VM
After Bastion has been deployed to the virtual network, the screen changes to the connect page.

1. Type the username and password for your virtual machine. Then, select **Connect**.

   :::image type="content" source="./media/quickstart-host-portal/connect.png" alt-text="Screenshot shows the Connect using Azure Bastion dialog.":::
-1. The RDP connection to this virtual machine will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service.
+1. The RDP connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5) using port 443 and the Bastion service.
+
+ * When you connect, the desktop of the VM may look different than the example screenshot.
+ * Using keyboard shortcut keys while connected to a VM may not result in the same behavior as shortcut keys on a local computer. For example, when connected to a Windows VM from a Windows client, CTRL+ALT+END is the keyboard shortcut for CTRL+ALT+Delete on a local computer. To do this from a Mac while connected to a Windows VM, the keyboard shortcut is Fn+CTRL+ALT+Backspace.
:::image type="content" source="./media/quickstart-host-portal/connected.png" alt-text="RDP connect":::
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/tutorial-create-host-portal.md
Previously updated : 04/27/2021 Last updated : 06/29/2021 # Tutorial: Configure Bastion and connect to a Windows VM through a browser
-This tutorial shows you how to connect to a virtual machine through your browser using Azure Bastion and the Azure portal. In the Azure portal, you deploy Bastion to your virtual network. After deploying Bastion, you connect to a VM via its private IP address using the Azure portal. Your VM does not need a public IP address or special software. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md).
+This tutorial shows you how to connect to a virtual machine through your browser using Azure Bastion and the Azure portal. In this tutorial, using the Azure portal, you deploy Bastion to your virtual network. Once the service is provisioned, the RDP/SSH experience is available to all of the virtual machines in the same virtual network. When you use Bastion to connect, the VM does not need a public IP address or special software. After deploying Bastion, you can remove the public IP address from your VM if it is not needed for anything else. Next, you connect to a VM via its private IP address using the Azure portal. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md).
In this tutorial, you'll learn how to:
If you don't have an Azure subscription, create a [free account](https://azure
## Prerequisites

* A virtual network.
-* A Windows virtual machine in the virtual network.
-* The following required roles:
- * Reader role on the virtual machine.
- * Reader role on the NIC with private IP of the virtual machine.
- * Reader role on the Azure Bastion resource.
+* A Windows virtual machine in the virtual network. If you don't have a VM, create one using [Quickstart: Create a VM](../virtual-machines/windows/quick-create-portal.md).
+* The following required roles for your resources:
+ * Required VM roles:
+ * Reader role on the virtual machine.
+ * Reader role on the NIC with private IP of the virtual machine.
* Ports: To connect to the Windows VM, you must have the following ports open on your Windows VM:
  * Inbound ports: RDP (3389)
This section helps you create the bastion object in your VNet. This is required
1. Review your settings. Next, at the bottom of the page, select **Create**.
1. You will see a message letting you know that your deployment is underway. Status will display on this page as the resources are created. It takes about 5 minutes for the Bastion resource to be created and deployed.
-## Remove a VM public IP address
+## Remove VM public IP address
[!INCLUDE [Remove a public IP address from a VM](../../includes/bastion-remove-ip.md)]
This section helps you create the bastion object in your VNet. This is required
[!INCLUDE [Connect to a Windows VM](../../includes/bastion-vm-rdp.md)]

## Clean up resources

If you're not going to continue to use this application, delete
cognitive-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-apprentice-mode.md
Due to the nature of **real-world** Reinforcement Learning, a Personalizer model
## What is Apprentice mode?
-Similar to how an apprentice learns from a master, and with experience can get better; Apprentice mode is a _behavior_ that lets Personalizer learn by observing the results obtained from existing application logic.
+Similar to how an apprentice learns a craft from an expert and improves with experience, Apprentice mode is a _behavior_ that lets Personalizer learn by observing the results obtained from existing application logic.
Personalizer trains by mimicking the same output as the application. As more events flow, Personalizer can _catch up_ to the existing application without impacting the existing logic and outcomes. Metrics, available from the Azure portal and the API, help you understand the performance as the model learns.
Apprentice Mode attempts to train the Personalizer model by attempting to imitat
### Scenarios where Apprentice mode may not be appropriate
-* **Editorially chosen Content**: In some scenarios such as news or entertainment, the baseline item could be manually assigned by an editorial team. This means humans are using their knowledge about the broader world, and understanding of what may be appealing content, to choose specific articles or media out of a pool, and flagging them as "preferred" or "hero" articles. Because these editors are not an algorithm, and the factors considered by editors can be nuanced and not included as features of the context and actions, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations:
-** Test Personalizer in Online Mode: Apprentice mode not predicting baselines does not imply Personalizer can't achieve as-good or even better results. Consider putting Personalizer in Online Mode for a period of time or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
-** Add editorial considerations and recommendations as features: Ask your editors what factors influence their choices, and see if you can add those as features in your context and action. For example, editors in a media company may highlight content while a certain celebrity is in the news: This knowledge could be added as a Context feature.
+#### Editorially chosen content
+In some scenarios, such as news or entertainment, the baseline item might be manually assigned by an editorial team. This means humans use their knowledge about the broader world, and their understanding of what may be appealing content, to choose specific articles or media out of a pool and flag them as "preferred" or "hero" articles. Because these editors are not an algorithm, and the factors they consider can be nuanced and not included as features of the context and actions, Apprentice mode is unlikely to be able to predict the next baseline action. In these situations you can:
+
+* Test Personalizer in Online mode: even if Apprentice mode can't predict the baseline, Personalizer may still achieve as-good or better results. Consider putting Personalizer in Online mode for a period of time, or in an A/B test if you have the infrastructure, and then run an Offline Evaluation to assess the difference.
+* Add editorial considerations and recommendations as features: ask your editors what factors influence their choices, and see if you can add those as features in your context and actions. For example, editors in a media company may highlight content while a certain celebrity is in the news; this knowledge could be added as a Context feature.
### Factors that will improve and accelerate Apprentice mode

If Apprentice mode is learning and attaining matched rewards above zero but seems to be growing slowly (not reaching 60%-80% matched rewards within two weeks), too little data may be the problem. Taking the following steps could accelerate the learning.
-1. Adding more events with positive rewards over time: Apprentice mode will perform better in use cases where your application gets more than 100 positive rewards per day. For example, if a website rewarding a click has 2% clickthrough, it should be having at least 5,000 visits per day to have noticeable learning. You can also experiment with a reward that is simpler and happens more frequently. For example going from "Did users finish reading the article" to "Did users start reading the article".
-2. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from other actions? If they look mostly the same, add more features that will make them less similar.
-3. Reducing Actions per Event: Personalizer will use the Explore % setting to discover preferences and trends. When a Rank call has more actions, the chance of an Action being xhosen for exploration becomes lower. Reduce the number of actions sent in each Rank call to a smaller number, to less than 10. This can be a temporary adjustement to show that Apprentice Mode has the right data to match rewards.
-
+1. Adding more events with positive rewards over time: Apprentice mode performs better in use cases where your application gets more than 100 positive rewards per day. For example, if a website rewarding a click has a 2% clickthrough rate, it should have at least 5,000 visits per day for noticeable learning.
+2. Try a reward score that is simpler and happens more frequently. For example, going from "Did users finish reading the article" to "Did users start reading the article".
+3. Adding differentiating features: You can do a visual inspection of the actions in a Rank call and their features. Does the baseline action have features that are differentiated from the other actions? If they look mostly the same, add more features that will make them less similar.
+4. Reducing actions per event: Personalizer uses the Explore % setting to discover preferences and trends. When a Rank call has more actions, the chance of any one action being chosen for exploration becomes lower. Reduce the number of actions sent in each Rank call to fewer than 10. This can be a temporary adjustment to show that Apprentice mode has the right data to match rewards.
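The visit-count guidance in step 1 above is simple arithmetic: daily visits times the clickthrough rate must reach the target number of positive rewards. A small sketch (not part of the Personalizer SDK; the function name is illustrative):

```python
import math

def visits_needed(target_rewards_per_day, clickthrough_rate):
    """Daily visits required so that visits * clickthrough_rate
    reaches target_rewards_per_day (rounded up)."""
    if not 0 < clickthrough_rate <= 1:
        raise ValueError("clickthrough_rate must be in (0, 1]")
    return math.ceil(target_rewards_per_day / clickthrough_rate)

# The guidance above: 100 positive rewards/day at a 2% clickthrough rate.
print(visits_needed(100, 0.02))  # 5000
```

A simpler, more frequent reward (step 2) effectively raises the rate, lowering the traffic needed for the same learning signal.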
## Using Apprentice mode to train with historical data
communication-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/authentication.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/best-practices.md
Previously updated : 06/18/2021 Last updated : 06/30/2021
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-flows.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
Previously updated : 09/30/2020 Last updated : 06/30/2021
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
Previously updated : 09/30/2020 Last updated : 06/30/2021
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/client-and-server-architecture.md
Previously updated : 03/10/2021 Last updated : 06/30/2021 # Client and Server Architecture
-Every Azure Communication Services application will have **client applications** that use **services** to facilitate person-to-person connectivity. This page illustrates common architectural elements in a variety of scenarios.
+This page illustrates typical architectural components and dataflows in various Azure Communication Services scenarios. Relevant components include:
-## User access management
-
-Azure Communication Services SDKs require `user access tokens` to access Communication Services resources securely. `User access tokens` should be generated and managed by a trusted service due to the sensitive nature of the token and the connection string necessary to generate them. Failure to properly manage access tokens can result in additional charges due to misuse of resources. It is highly recommended to make use of a trusted service for user management. The trusted service will generate the tokens and pass them back to the client using proper encryption. A sample architecture flow can be found below:
+1. **Client Application.** This website or native application is leveraged by end-users to communicate. Azure Communication Services provides [SDK client libraries](sdk-options.md) for multiple browsers and application platforms. In addition to our core SDKs, [a UI toolkit](https://aka.ms/acsstorybook) is available to accelerate browser app development.
+1. **Identity Management Service.** This is a service capability you build to map users and other concepts in your business logic to Azure Communication Services identities, and to create tokens for those users when required.
+1. **Call Management Service.** This is a service capability you build to manage and monitor voice and video calls. This service can create calls, invite users, call phone numbers, play audio, listen for DTMF tones, and leverage many other call features through the Call Automation SDK and REST APIs.
-For additional information review [best identity management practices](../../security/fundamentals/identity-management-best-practices.md)
-
-## Browser communication
+## User access management
-Azure Communications JavaScript SDKs can enable web applications with rich text, voice, and video interaction. The application directly interacts with Azure Communication Services through the SDK to access the data plane and deliver real-time text, voice, and video communication. A sample architecture flow can be found below:
+Azure Communication Services clients must present `user access tokens` to access Communication Services resources securely. `User access tokens` should be generated and managed by a trusted service due to the sensitive nature of the token and the connection string or managed identity necessary to generate them. Failure to properly manage access tokens can result in additional charges due to misuse of resources.
-## Native app communication
+### Dataflows
+1. The user starts the client application. The design of this application and user authentication scheme is in your control.
+2. The client application contacts your identity management service. The identity management service maintains a mapping from your users and other addressable objects (for example, services or bots) to Azure Communication Services identities.
+3. The identity management service creates a user access token for the applicable identity. If no Azure Communication Services identity has been allocated in the past, a new identity is created.
-Many scenarios are best served with native applications. Azure Communication Services supports both browser-to-app and app-to-app communication. When building a native application experience, having push notifications will enable users to receive calls even when the application is not running. Azure Communication Services makes this easy with integrated push notifications to Google Firebase, Apple Push Notification Service, and Windows Push Notifications. A sample architecture flow can be found below:
+### Resources
+- **Concept:** [User Identity](identity-model.md)
+- **Quickstart:** [Create and manage access tokens](../quickstarts/access-tokens.md)
+- **Tutorial:** [Build an identity management service using Azure Functions](../tutorials/trusted-service-tutorial.md)
+> [!IMPORTANT]
+> For simplicity, we do not show user access management and token distribution in subsequent architecture flows.
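The identity-management dataflow above can be sketched as a small in-memory service. This is illustrative only: the `_create_acs_identity` and `_issue_token` helpers are hypothetical stand-ins for the real Azure Communication Services Identity SDK calls (for example, `CommunicationIdentityClient.create_user` and `get_token` in the Python SDK), and no real tokens are produced.

```python
import secrets

class IdentityManagementService:
    """Sketch of a trusted service mapping app users to ACS identities
    (dataflow steps 2-3). Replace the stand-in helpers with calls to
    CommunicationIdentityClient in a real implementation."""

    def __init__(self):
        self._acs_ids = {}  # app user ID -> ACS identity (step 2 mapping)

    def _create_acs_identity(self, app_user_id):
        # Stand-in for CommunicationIdentityClient.create_user()
        return f"acs-{secrets.token_hex(8)}"

    def _issue_token(self, acs_id):
        # Stand-in for CommunicationIdentityClient.get_token(user, scopes)
        return f"token-{acs_id}-{secrets.token_hex(4)}"

    def get_access_token(self, app_user_id):
        # Step 3: reuse the mapped identity, or allocate one if none exists.
        if app_user_id not in self._acs_ids:
            self._acs_ids[app_user_id] = self._create_acs_identity(app_user_id)
        acs_id = self._acs_ids[app_user_id]
        return acs_id, self._issue_token(acs_id)

svc = IdentityManagementService()
id1, _ = svc.get_access_token("alice")
id2, _ = svc.get_access_token("alice")  # same app user -> same ACS identity
print(id1 == id2)  # True
```

The key design point the sketch captures is that identity allocation is idempotent per application user, while each token request can mint a fresh short-lived token.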
-## Voice and SMS over the public switched telephony network (PSTN)
-Communicating over the phone system can dramatically increase the reach of your application. To support PSTN voice and SMS scenarios, Azure Communication Services helps you [acquire phone numbers](../quickstarts/telephony-sms/get-phone-number.md) directly from the Azure portal or using REST APIs and SDKs. Once phone numbers are acquired, they can be used to reach customers using both PSTN calling and SMS in both inbound and outbound scenarios. A sample architecture flow can be found below:
+## Calling a user without push notifications
+The simplest voice and video calling scenario involves one user calling another, in the foreground, without push notifications.
-> [!Note]
-> During public preview, the provisioning of US phone numbers is available to customers with billing addresses located within the US and Canada.
+### Dataflows
-For more information on PSTN phone numbers, see [Phone number types](../concepts/telephony-sms/plan-solution.md)
+1. The accepting user initializes the Call client, allowing them to receive incoming phone calls.
+2. The initiating user needs the Azure Communication Services identity of the person they want to call. A typical experience may have a *friends list* maintained by the identity management service that collates the user's friends and their associated Azure Communication Services identities.
+3. The initiating user initializes their Call client and calls the remote user.
+4. The accepting user is notified of the incoming call through the Calling SDK.
+5. The users communicate with each other using voice and video in a call.
-## Humans communicating with bots and other services
+### Resources
+- **Concept:** [Calling Overview](voice-video-calling/calling-sdk-features.md)
+- **Quickstart:** [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
+- **Quickstart:** [Add video calling to your app](../quickstarts/voice-video-calling/get-started-with-video-calling.md)
+- **Hero Sample:** [Group Calling for Web, iOS, and Android](../samples/calling-hero-sample.md)
-Azure Communication Services supports human-to-system communication though text and voice channels, with services that directly access the Azure Communication Services data plane. For example, you can have a bot answer incoming phone calls or participate in a web chat. Azure Communication Services provides SDKs that enable these scenarios for calling and chat. A sample architecture flow can be found below:
+## Joining a user-created group call
+You may want users to join a call without an explicit invitation. For example, you may have a *social space* with an associated call, and users join that call at their leisure. In this first dataflow, we show a call that is initially created by a client.
-## Networking
-You may want to exchange arbitrary data between users, for example to synchronize a shared mixed reality or gaming experience. The real-time data plane used for text, voice, and video communication is available to you directly in two ways:
+### Dataflows
+1. The initiating user initializes their Call client and makes a group call.
+2. The initiating user shares the group call ID with a Call management service.
+3. The Call Management Service shares the call ID with other users. For example, if the application orients around scheduled events, the group call ID might be an attribute of the scheduled event's data model.
+4. Other users join the call using the group call ID.
+5. The users communicate with each other using voice and video in a call.
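Steps 2-3 above can be sketched as a minimal call management service that stores the group call ID on a scheduled event's data model. This is illustrative only: the class and method names are hypothetical, and in a real application the group call ID comes from the Calling SDK when the initiating client creates the group call.

```python
class CallManagementService:
    """Sketch of a service that shares a group call ID via an event's
    data model (steps 2-4 of the dataflow above)."""

    def __init__(self):
        self._events = {}  # event ID -> event record

    def schedule_event(self, event_id, title):
        self._events[event_id] = {"title": title, "group_call_id": None}

    def attach_group_call(self, event_id, group_call_id):
        # Step 2: the initiating user shares the call ID with the service.
        self._events[event_id]["group_call_id"] = group_call_id

    def get_group_call_id(self, event_id):
        # Steps 3-4: other users look up the ID in order to join the call.
        return self._events[event_id]["group_call_id"]

svc = CallManagementService()
svc.schedule_event("standup", "Daily standup")
svc.attach_group_call("standup", "29228d3e-...")  # ID from the Calling SDK
print(svc.get_group_call_id("standup"))  # 29228d3e-...
```

Any durable store works here; the point is only that the group call ID is an attribute of the event record, discoverable by every participant.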
-- **Calling SDK** - Devices in a call have access to APIs for sending and receiving data over the call channel. This is the easiest way to add data communication to an existing interaction.-- **STUN/TURN** - Azure Communication Services makes standards-compliant STUN and TURN services available to you. This allows you to build a heavily customized transport layer on top of these standardized primitives. You can author your own standards-compliant client or use open-source libraries such as [WinRTC](https://github.com/microsoft/winrtc).
-## Next steps
+## Joining a scheduled Teams call
+Azure Communication Services applications can join Teams calls. This is ideal for many business-to-consumer scenarios, where the consumer is using a custom application and custom identity, while the business side is using Teams.
-> [!div class="nextstepaction"]
-> [Creating user access tokens](../quickstarts/access-tokens.md)
-For more information, see the following articles:
-- Learn about [authentication](../concepts/authentication.md)-- Learn about [Phone number types](../concepts/telephony-sms/plan-solution.md)
+### Dataflows
+1. The Call Management Service creates a group call with [Graph APIs](https://docs.microsoft.com/graph/api/resources/onlinemeeting?view=graph-rest-1.0). Another pattern involves end users creating the group call using [Bookings](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app), Outlook, Teams, or another scheduling experience in the Microsoft 365 ecosystem.
+2. The Call Management Service shares the Teams call details with Azure Communication Services clients.
+3. Typically, a Teams user must join the call and allow external users to join through the lobby. However, this experience is sensitive to the Teams tenant configuration and specific meeting settings.
+4. Azure Communication Services users initialize their Call client and join the Teams meeting, using the details received in step 2.
+5. The users communicate with each other using voice and video in a call.
-- [Add chat to your app](../quickstarts/chat/get-started.md)-- [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
+### Resources
+- **Concept:** [Teams Interoperability](teams-interop.md)
+- **Quickstart:** [Join a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
communication-services Detailed Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/detailed-call-flows.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Identity Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/identity-model.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/known-issues.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
This section provides information about known issues associated with the Azure C
If a user is in a call and decides to refresh the page, the Communication Services media service won't remove this user immediately from the call. It will wait for the user to rejoin. The user will be removed from the call after the media service times out.
-It's best to build user experiences that don't require end-users to refresh the page of your application while in a call. If a user refreshes the page, reuse the same Communication Services user ID after they return back to the application.
+It's best to build user experiences that don't require end users to refresh the page of your application while in a call. If a user refreshes the page, reuse the same Communication Services user ID after they return back to the application.
+
+From the perspective of other participants in the call, the user will remain in the call for a minute or two.
-From the perspective of other participants in the call, the user will remain in the call for period of time (1-2 minutes).
If the user rejoins with the same Communication Services user ID, they'll be represented as the same, existing object in the `remoteParticipants` collection. If the user was sending video before refreshing, the `videoStreams` collection will keep the previous stream information until the service times out and removes it. In this scenario, the application may decide to observe any new streams added to the collection and render one with the highest `id`.

### It's not possible to render multiple previews from multiple devices on web
-This is a known limitation. For more information, refer to the [calling SDK overview](./voice-video-calling/calling-sdk-features.md).
+This is a known limitation. For more information, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md).
### Enumerating devices isn't possible in Safari when the application runs on iOS or iPadOS

Applications can't enumerate/select mic/speaker devices (like Bluetooth) on Safari iOS/iPad. This is a known operating system limitation.
-If you're using Safari on macOS, your app will not be able to enumerate/select speakers through the Communication Services Device Manager. In this scenario, devices must be selected via the OS. If you use Chrome on macOS, the app can enumerate/select devices through the Communication Services Device Manager.
+If you're using Safari on macOS, your app won't be able to enumerate/select speakers through the Communication Services Device Manager. In this scenario, devices must be selected via the OS. If you use Chrome on macOS, the app can enumerate/select devices through the Communication Services Device Manager.
### Audio connectivity is lost when receiving SMS messages or calls during an ongoing VoIP call This problem may occur due to multiple reasons:
Switching between video devices may cause your video stream to pause while the s
Switching between devices frequently can cause performance degradation. Developers are encouraged to stop one device stream before starting another.

### Bluetooth headset microphone is not detected and therefore is not audible during the call on Safari on iOS
-Bluetooth headsets aren't supported by Safari on iOS. Your Bluetooth device will not be listed in available microphone options and other participants will not be able to hear you if you try using Bluetooth over Safari.
+Bluetooth headsets aren't supported by Safari on iOS. Your Bluetooth device won't be listed in available microphone options and other participants won't be able to hear you if you try using Bluetooth over Safari.
#### Possible causes This is a known macOS/iOS/iPadOS operating system limitation.
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/logging-and-diagnostics.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/metrics.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/reference.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/sdk-options.md
Previously updated : 03/25/2021 Last updated : 06/30/2021
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-endpoint.md
Previously updated : 05/31/2021 Last updated : 06/30/2021
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/concepts.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Messaging Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/messaging-policy.md
Previously updated : 03/19/2021 Last updated : 06/30/2021
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/plan-solution.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sdk-features.md
Previously updated : 03/26/2021 Last updated : 06/30/2021
communication-services Sip Interface Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sip-interface-infrastructure.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sms-faq.md
Previously updated : 03/26/2021 Last updated : 06/30/2021
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/troubleshooting-info.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/teams-embed.md
description: In this document, you'll learn how the Azure Communication Services UI Library Teams Embed capability can be used to build turnkey calling experiences. Previously updated : 11/16/2020 Last updated : 06/30/2021
communication-services Ui Library Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/ui-library-overview.md
Previously updated : 05/11/2021 Last updated : 06/30/2021
communication-services Ui Library Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/ui-library-use-cases.md
Previously updated : 05/11/2021 Last updated : 06/30/2021
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
Previously updated : 03/25/2021 Last updated : 06/30/2021 # Voice and video concepts + You can use Azure Communication Services to make and receive one to one or group voice and video calls. Your calls can be made to other Internet-connected devices and to plain-old telephones. You can use the Communication Services JavaScript, Android, or iOS SDKs to build applications that allow your users to speak to one another in private conversations or in group discussions. Azure Communication Services supports calls to and from services or Bots. ## Call types in Azure Communication Services
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
Previously updated : 04/16/2021 Last updated : 06/30/2021
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-recording.md
Previously updated : 04/13/2021 Last updated : 06/30/2021
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Previously updated : 03/10/2021 Last updated : 06/30/2021 # Calling SDK overview + The Calling SDK enables end-user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, please check out [Calling quickstarts](../../quickstarts/voice-video-calling/getting-started-with-calling.md) or [Calling hero sample](../../samples/calling-hero-sample.md). Once you've started development, check out the [known issues page](../known-issues.md) to find bugs we're working on.
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Previously updated : 3/23/2021 Last updated : 06/30/2021
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
# What is Azure Communication Services?
-Azure Communication Services allows you to easily add real-time voice, video, and telephone communication to your applications. Communication Services SDKs also allow you to add SMS functionality to your communications solutions. Azure Communication Services is identity agnostic and you have complete control over how end users are identified and authenticated. You can connect humans to the communication data plane or services (bots).
+Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication features to your applications without being an expert in communication technologies such as media encoding and real-time networking. Azure Communication Services supports various communication formats:
-Applications include:
+1. Voice and Video Calling
+1. Rich Text Chat
+1. SMS
-- **Business to Consumer (B2C).** A business' employees and services can interact with consumers using voice, video, and rich text chat in a custom browser or mobile application. An organization can send and receive SMS messages, or operate an interactive voice response system (IVR) using a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) allows consumers to join Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams.-- **Consumer to Consumer.** Build engaging social spaces for consumer-to-consumer interaction with voice, video, and rich text chat. Any type of user interface can be built on Azure Communication Services SDKs, but complete application samples and UI assets are available to help you get started quickly.
+Voice and video calling applications can interact with the public switched telephone network (PSTN). You can acquire phone numbers directly through Azure Communication Services REST APIs, SDKs, or the Azure portal. Azure Communication Services direct routing allows you to use SIP and session border controllers to connect your own PSTN carriers and bring your own phone numbers.
+
+In addition to REST APIs, [Azure Communication Services client libraries](./concepts/sdk-options.md) are available for various platforms and languages, including web browsers (JavaScript), iOS (Swift), Android (Java), and Windows (.NET). Azure Communication Services is identity agnostic, and you control how end users are identified and authenticated.
+
+Scenarios for Azure Communication Services include:
+
+- **Business to Consumer (B2C).** A business' employees and services interact with consumers using voice, video, and rich text chat in a custom browser or mobile application. An organization can send and receive SMS messages, or [operate an interactive voice response system (IVR)](https://github.com/microsoft/botframework-telephony/blob/main/EnableTelephony.md) using a phone number you acquire through Azure. [Integration with Microsoft Teams](./quickstarts/voice-video-calling/get-started-teams-interop.md) can be used to connect consumers to Teams meetings hosted by employees; ideal for remote healthcare, banking, and product support scenarios where employees might already be familiar with Teams.
+- **Consumer to Consumer (C2C).** Build engaging social spaces for consumer-to-consumer interaction with voice, video, and rich text chat. Any type of user interface can be built on Azure Communication Services SDKs, or you can use complete application samples and an open-source UI toolkit to get started quickly.
To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com/watch?v=apBX7ASurgM) or the resources linked below.
To learn more, check out our [Microsoft Mechanics video](https://www.youtube.com
| Resource |Description | | | | |**[Create a Communication Services resource](./quickstarts/create-communication-resource.md)**|Begin using Azure Communication Services by using the Azure portal or Communication Services SDK to provision your first Communication Services resource. Once you have your Communication Services resource connection string, you can provision your first user access tokens.|
-|**[Get a phone number](./quickstarts/telephony-sms/get-phone-number.md)**|You can use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate or receive phone calls and build SMS solutions.|
-|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|The Azure Communication Services SMS SDK is used send and receive SMS messages from service applications.|
+|**[Get a phone number](./quickstarts/telephony-sms/get-phone-number.md)**|Use Azure Communication Services to provision and release telephone numbers. These telephone numbers can be used to initiate or receive phone calls and build SMS solutions.|
+|**[Send an SMS from your app](./quickstarts/telephony-sms/send.md)**|Azure Communication Services SMS REST APIs and SDKs are used to send and receive SMS messages from service applications.|
After creating a Communication Services resource you can start building client scenarios, such as voice and video calling or text chat: | Resource |Description | | | |
-|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens are used to authenticate clients against your Azure Communication Services resource. These tokens are provisioned and reissued using the Communication Services SDK.|
+|**[Create your first user access token](./quickstarts/access-tokens.md)**|User access tokens authenticate clients against your Azure Communication Services resource. These tokens are provisioned and reissued using Communication Services Identity APIs and SDKs.|
|**[Get started with voice and video calling](./quickstarts/voice-video-calling/getting-started-with-calling.md)**| Azure Communication Services allows you to add voice and video calling to your browser or native apps using the Calling SDK. | |**[Add telephony calling to your app](./quickstarts/voice-video-calling/pstn-call.md)**|With Azure Communication Services you can add telephony calling capabilities to your application.| |**[Join your calling app to a Teams meeting](./quickstarts/voice-video-calling/get-started-teams-interop.md)**|Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.| |**[Get started with chat](./quickstarts/chat/get-started.md)**|The Azure Communication Services Chat SDK is used to add rich real-time text chat into your applications.|
+|**[Connect a Microsoft Bot to a phone number](https://github.com/microsoft/botframework-telephony)**|The Telephony channel in Microsoft Bot Framework enables a bot to interact with users over the phone. It combines the power of Microsoft Bot Framework with Azure Communication Services and Azure Speech Services.|
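The user access tokens referenced in the table above are JSON Web Tokens (JWTs). As a hedged illustration (the token below is fabricated in-place, and the `skypeid` claim name is an assumption for this sketch, not taken from this article), a client can inspect a token's payload without verifying its signature:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode (without verifying!) the payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWTs strip.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(segment: dict) -> str:
    """Serialize a dict as an unpadded base64url JWT segment."""
    raw = json.dumps(segment).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Fabricated example token: header, claims, and signature are illustrative only.
token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"skypeid": "acs:example-user", "exp": 1700000000}),
    "fake-signature",
])

claims = decode_jwt_payload(token)
print(claims["skypeid"])  # acs:example-user
```

In production, treat tokens as opaque credentials and let the SDKs handle them; decoding without signature verification is only useful for debugging expiry and identity claims.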
+ ## Samples
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/access-tokens.md
Previously updated : 03/10/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-js-csharp-java-python
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/get-started.md
Previously updated : 03/10/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-js-csharp-java-python-swift-android
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
description: In this quickstart, you'll learn how to join a Teams meeting with the Azure Communication Chat SDK Previously updated : 03/10/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-web-ios
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
Previously updated : 03/10/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-plat-azp-azcli-net-ps
Get started with Azure Communication Services by provisioning your first Communi
> [!WARNING]
-> Note that while Communication Services is available in multiple geographies, in order to get a phone number the resource must have a data location set to 'US'. Also note that resource moves are not currently supported, but will be available soon.
+> Note that while Communication Services is available in multiple geographies, in order to get a phone number the resource must have a data location set to 'US'.
> Also note it is not possible to create a resource group at the same time as a resource for Azure Communication Services. When creating a resource, a resource group that has been created already must be used. ::: zone pivot="platform-azp"
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/manage-teams-identity.md
Previously updated : 05/31/2021 Last updated : 06/30/2021
communication-services Managed Identity From Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity-from-cli.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/managed-identity.md
Previously updated : 05/27/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-js-csharp-java-python
communication-services Getting Started With Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/getting-started-with-teams-embed.md
description: In this quickstart, you'll learn how to add join Teams meeting capabilities to your app using Azure Communication Services. Previously updated : 01/25/2021 Last updated : 06/30/2021
communication-services Samples For Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/samples-for-teams-embed.md
Title: Using the Azure Communication Services Teams Embed Library
description: Learn about the Communication Services Teams Embed library capabilities. Previously updated : 02/24/2021 Last updated : 06/30/2021
communication-services Relay Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/relay-token.md
Previously updated : 05/21/2021 Last updated : 06/30/2021
communication-services Telemetry Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telemetry-application-insights.md
Previously updated : 06/01/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-js-csharp-java-python
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/get-phone-number.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/logic-app.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Port Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/port-phone-number.md
Previously updated : 03/20/2021 Last updated : 06/30/2021
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/send.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Call Automation Api Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-automation-api-sample.md
Previously updated : 06/08/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-csharp-java
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
Previously updated : 06/21/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-csharp-java # Call Recording API Quickstart
-Get started with Azure Communication Services by using the Communication Services Calling server SDKs to develop call recording features.
+This quickstart gets you started with recording voice and video calls. It assumes you've already used the [Calling client SDK](get-started-with-video-calling.md) to build the end-user calling experience. Using the **Calling Server APIs and SDKs**, you can enable and manage recordings.
::: zone pivot="programming-language-csharp" [!INCLUDE [Build Call Recording server sample with C#](./includes/call-recording-samples/recording-server-csharp.md)]
communication-services Calling Client Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/calling-client-samples.md
Previously updated : 03/10/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-plat-web-ios-android-windows
communication-services Download Recording File Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/download-recording-file-sample.md
Previously updated : 04/14/2021 Last updated : 06/30/2021
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
description: In this quickstart, you'll learn how to join an Teams meeting with the Azure Communication Calling SDK. Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
description: In this quickstart, you'll learn how to add video calling capabilities to your app using Azure Communication Services. Previously updated : 06/01/2021 Last updated : 06/30/2021
zone_pivot_groups: acs-plat-web-ios-android-windows
# QuickStart: Add 1:1 video calling to your app + ::: zone pivot="platform-web" [!INCLUDE [Video calling with JavaScript](./includes/video-calling/video-calling-javascript.md)] ::: zone-end
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
description: In this quickstart, you'll learn how to add calling capabilities to your app using Azure Communication Services. Previously updated : 03/10/2021 Last updated : 06/30/2021
zone_pivot_groups: acs-plat-web-ios-android-windows
Get started with Azure Communication Services by using the Communication Services Calling SDK to add voice and video calling to your app. + [!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)] ::: zone pivot="platform-windows"
communication-services Pstn Call https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/pstn-call.md
description: In this quickstart, you'll learn how to add PSTN calling capabilities to your app using Azure Communication Services. Previously updated : 03/10/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-plat-web-ios-android
communication-services Calling Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/calling-hero-sample.md
Previously updated : 03/10/2021 Last updated : 06/30/2021 zone_pivot_groups: acs-web-ios-android
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/chat-hero-sample.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/overview.md
Previously updated : 03/12/2021 Last updated : 06/30/2021
communication-services Web Calling Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/web-calling-sample.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/support.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Building App Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/building-app-start.md
description: Learn how to create a baseline web application that supports Azure
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Hmac Header Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/hmac-header-tutorial.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
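The HMAC header tutorial listed above signs Communication Services REST requests with the resource access key. As a minimal offline sketch (the key, host, date, and request path below are fake values for illustration; consult the tutorial for the authoritative header format), the signing scheme hashes the body, builds a string to sign, and HMACs it with the decoded key:

```python
import base64
import hashlib
import hmac

def sign_request(access_key_b64: str, method: str, path_and_query: str,
                 host: str, date: str, body: str) -> dict:
    """Sketch of HMAC-SHA256 request signing for Communication Services REST calls."""
    # Base64-encoded SHA-256 hash of the request body.
    content_hash = base64.b64encode(hashlib.sha256(body.encode()).digest()).decode()
    # Canonical string: VERB, path+query, then date;host;content-hash.
    string_to_sign = f"{method}\n{path_and_query}\n{date};{host};{content_hash}"
    key = base64.b64decode(access_key_b64)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return {
        "x-ms-date": date,
        "x-ms-content-sha256": content_hash,
        "Authorization": (
            "HMAC-SHA256 SignedHeaders=x-ms-date;host;x-ms-content-sha256"
            f"&Signature={signature}"
        ),
    }

# Fake key and values for illustration only.
headers = sign_request(
    base64.b64encode(b"fake-secret-key").decode(),
    "POST",
    "/sms?api-version=2021-03-07",
    "contoso.communication.azure.com",
    "Mon, 01 Mar 2021 12:00:00 GMT",
    '{"message":"Hello"}',
)
print(headers["Authorization"])
```

The signed headers are then attached to the HTTPS request; the service recomputes the same HMAC server-side to authenticate the caller.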
communication-services Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/postman-tutorial.md
description: Learn how to sign and makes requests for ACS with Postman to send a
Previously updated : 03/10/2021 Last updated : 06/30/2021
communication-services Trusted Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/trusted-service-tutorial.md
Previously updated : 03/10/2021 Last updated : 06/30/2021
confidential-computing Quick Create Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/quick-create-marketplace.md
Previously updated : 04/06/2020 Last updated : 06/13/2021 # Quickstart: Deploy an Azure Confidential Computing VM in the Marketplace
-Get started with Azure confidential computing by using the Azure Marketplace to create a virtual machine (VM) backed by Intel SGX. You'll then install the Open Enclave Software Development Kit (SDK) to set up your development environment.
+Get started with Azure confidential computing by using the Azure portal to create a virtual machine (VM) backed by Intel SGX. Optionally, you can test an enclave application built with the Open Enclave Software Development Kit (OE SDK).
-This tutorial is recommended if you want to quickly start deploying a confidential computing virtual machine. The VMs are run on specialty hardware and require specific configuration inputs to run as intended. The marketplace offering described in this quickstart makes it easier to deploy, by restricting user input.
-
-If you're interested in deploying a confidential compute virtual machine with more custom configuration, follow the [Azure portal Confidential Compute virtual machine deployment steps](quick-create-portal.md).
+This tutorial is recommended if you're interested in deploying a confidential computing virtual machine with a template configuration. Otherwise, we recommend following the standard Azure virtual machine deployment flow [using the portal or CLI](quick-create-portal.md).
## Prerequisites
If you don't have an Azure subscription, [create an account](https://azure.micro
> [!NOTE] > Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription. + ## Sign in to Azure 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. At the top, type **Azure confidential computing** into the search bar.
+1. At the top, select **Create a resource**.
+
+1. In the **Get Started** default pane, search for **Azure Confidential Computing (Virtual Machine)**.
+
+1. Click the **Azure Confidential Computing (Virtual Machine)** template.
-1. Select **Azure confidential computing (Virtual Machine)** in the **Marketplace** section.
+ ![Deploy a VM](media/quick-create-marketplace/portal-search-marketplace.png)
- ![Select Marketplace](media/quick-create-marketplace/portal-search-marketplace.png)
+1. On the Virtual machine landing page, select **Create**.
-1. On the Azure confidential compute deployment landing page, select **Create**.
-
-## Configure your virtual machine
+## Configure a confidential computing virtual machine
-1. In the **Basics** tab, select your **Subscription** and **Resource Group**. Your resource group must be empty to deploy a virtual machine from this template into it.
+1. In the **Basics** tab, select your **Subscription** and **Resource Group** (the resource group must be empty to deploy this template).
+
+1. For **Virtual machine name**, enter a name for your new VM.
1. Type or select the following values:
If you don't have an Azure subscription, [create an account](https://azure.micro
> [!NOTE] > Confidential compute virtual machines only run on specialized hardware available in specific regions. For the latest available regions for DCsv2-Series VMs, see [available regions](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
-
- * **Choose Image**: Select any image. If you would like to complete this specific tutorial, select Ubuntu 18.04 (Gen 2). Otherwise, you'll be redirected at the appropriate steps below.
- * **Virtual machine name**, enter a name for your new VM.
+1. Configure the operating system image that you would like to use for your virtual machine. This setup only supports Gen 2 VM and image deployments.
- * **Authentication type**:
- Select **SSH public key** if you're creating a Linux VM.
+ * **Choose Image**: For this tutorial, select Ubuntu 18.04 LTS (Gen 2). You may also select Windows Server Datacenter 2019, Windows Server Datacenter 2016, or Ubuntu 16.04 LTS. If you select a different image, you'll be redirected accordingly in this tutorial.
+
+1. Fill in the following information in the Basics tab:
- > [!NOTE]
- > You have the choice of using an SSH public key or a Password for authentication. SSH is more secure. For instructions on how to generate an SSH key, see [Create SSH keys on Linux and Mac for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
+ * **Authentication type**: Select **SSH public key** if you're creating a Linux VM.
+
+ > [!NOTE]
+ > You have the choice of using an SSH public key or a Password for authentication. SSH is more secure. For instructions on how to generate an SSH key, see [Create SSH keys on Linux and Mac for Linux VMs in Azure](../virtual-machines/linux/mac-create-ssh-keys.md).
* **Username**: Enter the Administrator name for the VM. * **SSH public key**: If applicable, enter your RSA public key. * **Password**: If applicable, enter your password for authentication.
-
-1. Select the **Next: Virtual machine settings** button at the bottom of your screen.
-
- > [!IMPORTANT]
- > Wait for the page to update. You *should not* see a message that says "Confidential Computing DCsv2-series VMs are available in a limited number of regions." If this message persists, return to the previous page and select an available DCsv2-Series region.
-
-1. For **change size**, choose a VM with confidential compute capabilities in the size selector.
-
- > [!TIP]
- > You should see sizes **DC1s_v2**, **DC2s_v2**, **DC4s_V2**, and **DC8_v2**. These are the only virtual machine sizes that currently support confidential computing. [Learn more](virtual-machine-solutions.md).
+
+1. Fill in the following information in the "Virtual Machine Settings" tab:
-1. For **OS Disk Type**, select a disk type.
+ * Choose the VM SKU Size
+ * If you chose a **DC1s_v2**, **DC2s_v2**, or **DC4s_V2** virtual machine, choose a disk type that is either **Standard SSD** or **Premium SSD**. For a **DC8_v2** virtual machine, you can only choose **Standard SSD** as your disk type.
-1. For **Virtual Network**, create a new one or choose from an existing resource.
+ * **Public inbound ports**: Choose **Allow selected ports** and select **SSH (22)** and **HTTP (80)** in the **Select public inbound ports** list. If you're deploying a Windows VM, select **HTTP (80)** and **RDP (3389)**. In this quickstart, this step is necessary to connect to the VM.
+
+ >[!Note]
+ > Allowing RDP/SSH ports is not recommended for production deployments.
-1. For **Subnet**, create a new one or choose from an existing resource.
+ ![Inbound ports](media/quick-create-portal/inbound-port-virtual-machine.png)
-1. For **Select public inbound ports**, choose **SSH(Linux)/RDP(Windows)**. In this quickstart, this step is necessary to connect to the VM and complete the Open Enclave SDK configuration.
-1. For **Boot Diagnostics**, leave it disabled for this quickstart.
+1. Choose the **Monitoring** option if necessary.
1. Select **Review + create**. 1. In the **Review + create** pane, select **Create**. > [!NOTE]
-> Proceed to the next section and continue with this tutorial if you deployed a Linux VM. If you deployed a Windows VM, [follow these steps to connect to your Windows VM](../virtual-machines/windows/connect-logon.md) and then [install the OE SDK on Windows](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Windows.md).
+> Proceed to the next section and continue with this tutorial if you deployed a Linux VM. If you deployed a Windows VM, [follow these steps to connect to your Windows VM](../virtual-machines/windows/connect-logon.md)
## Connect to the Linux VM
ssh azureadmin@40.55.55.555
You can find the Public IP address of your VM in the Azure portal, under the Overview section of your virtual machine.
-![IP address in Azure portal](media/quick-create-portal/public-ip-virtual-machine.png)
If you're running on Windows and don't have a BASH shell, install an SSH client, such as PuTTY.
For more information about connecting to Linux VMs, see [Create a Linux VM on Az
> [!NOTE] > If you see a PuTTY security alert about the server's host key not being cached in the registry, choose from the following options. If you trust this host, select **Yes** to add the key to PuTTy's cache and continue connecting. If you want to carry on connecting just once, without adding the key to the cache, select **No**. If you don't trust this host, select **Cancel** to abandon the connection.
-## Install the Open Enclave SDK (OE SDK) <a id="Install"></a>
-
-Follow the step-by-step instructions to install the [OE SDK](https://github.com/openenclave/openenclave) on your DCsv2-Series virtual machine running an Ubuntu 18.04 LTS Gen 2 image.
-
-If your virtual machine runs on Ubuntu 18.04 LTS Gen 2, you'll need to follow [installation instructions for Ubuntu 18.04](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Ubuntu_18.04.md).
-
-#### 1. Configure the Intel and Microsoft APT Repositories
+## Intel SGX Drivers
-```bash
-echo 'deb [arch=amd64] https://download.01.org/intel-sgx/sgx_repo/ubuntu bionic main' | sudo tee /etc/apt/sources.list.d/intel-sgx.list
-wget -qO - https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | sudo apt-key add -
-
-echo "deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-7 main" | sudo tee /etc/apt/sources.list.d/llvm-toolchain-bionic-7.list
-wget -qO - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
-
-echo "deb [arch=amd64] https://packages.microsoft.com/ubuntu/18.04/prod bionic main" | sudo tee /etc/apt/sources.list.d/msprod.list
-wget -qO - https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
-```
-
-#### 2. Install the Intel SGX DCAP Driver
-Some versions of Ubuntu may already have the Intel SGX driver installed. Check using the following command:
-
-```bash
-dmesg | grep -i sgx
-[ 106.775199] sgx: intel_sgx: Intel SGX DCAP Driver {version}
-```
-If the output is blank, install the driver:
-
-```bash
-sudo apt update
-sudo apt -y install dkms
-wget https://download.01.org/intel-sgx/sgx-dcap/1.7/linux/distro/ubuntu18.04-server/sgx_linux_x64_driver_1.35.bin -O sgx_linux_x64_driver.bin
-chmod +x sgx_linux_x64_driver.bin
-sudo ./sgx_linux_x64_driver.bin
-```
-
-> [!WARNING]
-> Please use the latest Intel SGX DCAP driver from [Intel's SGX site](https://01.org/intel-software-guard-extensions/downloads).
+> [!NOTE]
+> Intel SGX drivers are already part of the Ubuntu and Windows Azure gallery images. No special driver installation is required. Optionally, you can update the existing drivers shipped in the images by visiting the [Intel SGX DCAP drivers list](https://01.org/intel-software-guard-extensions/downloads).
-#### 3. Install the Intel and Open Enclave packages and dependencies
+## Optional: Testing enclave apps built with Open Enclave SDK (OE SDK) <a id="Install"></a>
-```bash
-sudo apt -y install clang-8 libssl-dev gdb libsgx-enclave-common libprotobuf10 libsgx-dcap-ql libsgx-dcap-ql-dev az-dcap-client open-enclave
-```
+Follow the step-by-step instructions to install the [OE SDK](https://github.com/openenclave/openenclave) on your DCsv2-Series virtual machine running an Ubuntu 18.04 LTS Gen 2 image.
-> [!NOTE]
-> This step also installs the [az-dcap-client](https://github.com/microsoft/azure-dcap-client) package which is necessary for performing remote attestation in Azure.
+If your virtual machine runs on Ubuntu 18.04 LTS Gen 2, you'll need to follow [installation instructions for Ubuntu 18.04](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Ubuntu_18.04.md).
-#### 4. **Verify the Open Enclave SDK install**
-See [Using the Open Enclave SDK](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/Linux_using_oe_sdk.md) on GitHub for verifying and using the installed SDK.
+> [!NOTE]
+> Intel SGX drivers are already part of the Ubuntu and Windows Azure gallery images. No special driver installation is required. Optionally, you can update the existing drivers shipped in the images.
## Clean up resources
confidential-computing Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/quick-create-portal.md
Title: Quickstart - Create an Azure confidential computing virtual machine in the Azure portal description: Get started with your deployments by learning how to quickly create a confidential computing virtual machine in the Azure portal. - Previously updated : 04/23/2020- -+ -
- - mode-portal
+ Last updated : 06/13/2021+ # Quickstart: Deploy an Azure confidential computing VM in the Azure portal
-Get started with Azure confidential computing by using the Azure portal to create a virtual machine (VM) backed by Intel SGX. You'll then install the Open Enclave Software Development Kit (SDK) to set up your development environment.
+Get started with Azure confidential computing by using the Azure portal to create a virtual machine (VM) backed by Intel SGX. You'll then be able to run enclave applications.
This tutorial is recommended for you if you're interested in deploying a confidential compute virtual machine with custom configuration. Otherwise, we recommend following the [confidential Computing virtual machine deployment steps for the Microsoft commercial marketplace](quick-create-marketplace.md).
If you don't have an Azure subscription, [create an account](https://azure.micro
1. Configure the operating system image that you would like to use for your virtual machine.
- * **Choose Image**: For this tutorial, select Ubuntu 18.04 LTS. You may also select Windows Server 2019, Windows Server 2016, or and Ubuntu 20.04 LTS. If you choose to do so, you'll be redirected in this tutorial accordingly.
+ * **Choose Image**: For this tutorial, select Ubuntu 18.04 LTS. You may also select Windows Server 2019, Windows Server 2016, or Ubuntu 16.04 LTS. If you select a different image, you'll be redirected accordingly in this tutorial.
* **Toggle the image for Gen 2**: Confidential compute virtual machines only run on [Generation 2](../virtual-machines/generation-2.md) images. Ensure the image you select is a Gen 2 image. Click the **Advanced** tab above where you're configuring the virtual machine. Scroll down until you find the section labeled "VM Generation". Select Gen 2 and then go back to the **Basics** tab.
If you don't have an Azure subscription, [create an account](https://azure.micro
![DCsv2-Series VMs](media/quick-create-portal/dcsv2-virtual-machines.png) > [!TIP]
- > You should see sizes **DC1s_v2**, **DC2s_v2**, **DC4s_V2**, and **DC8_v2**. These are the only virtual machine sizes that currently support Intel SGX confidential computing. [Learn more](virtual-machine-solutions.md).
+ > You should see sizes **DC1s_v2**, **DC2s_v2**, **DC4s_V2**, and **DC8_v2**. These are the only virtual machine sizes that currently support confidential computing. [Learn more](virtual-machine-solutions.md).
1. Fill in the following information:
If you don't have an Azure subscription, [create an account](https://azure.micro
* **Password**: If applicable, enter your password for authentication.
- * **Public inbound ports**: Choose **Allow selected ports** and select **SSH (22)** and **HTTP (80)** in the **Select public inbound ports** list. If you're deploying a Windows VM, select **HTTP (80)** and **RDP (3389)**. In this quickstart, this step is necessary to connect to the VM and complete the Open Enclave SDK configuration.
+ * **Public inbound ports**: Choose **Allow selected ports** and select **SSH (22)** and **HTTP (80)** in the **Select public inbound ports** list. If you're deploying a Windows VM, select **HTTP (80)** and **RDP (3389)**.
+
+ >[!Note]
+ > Allowing RDP/SSH ports is not recommended for production deployments.
![Inbound ports](media/quick-create-portal/inbound-port-virtual-machine.png)
For more information about connecting to Linux VMs, see [Create a Linux VM on Az
> [!NOTE] > If you see a PuTTY security alert about the server's host key not being cached in the registry, choose from the following options. If you trust this host, select **Yes** to add the key to PuTTy's cache and continue connecting. If you want to carry on connecting just once, without adding the key to the cache, select **No**. If you don't trust this host, select **Cancel** to abandon the connection.
-## Install the Open Enclave SDK (OE SDK) <a id="Install"></a>
-
-Follow the step-by-step instructions to install the [OE SDK](https://github.com/openenclave/openenclave) on your DCsv2-Series virtual machine running an Ubuntu 18.04 LTS Gen 2 image.
-
-If your virtual machine runs on Ubuntu 18.04 LTS Gen 2, you'll need to follow [installation instructions for Ubuntu 18.04](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Ubuntu_18.04.md).
-
-#### 1. Configure the Intel and Microsoft APT Repositories
-
-```bash
-echo 'deb [arch=amd64] https://download.01.org/intel-sgx/sgx_repo/ubuntu bionic main' | sudo tee /etc/apt/sources.list.d/intel-sgx.list
-wget -qO - https://download.01.org/intel-sgx/sgx_repo/ubuntu/intel-sgx-deb.key | sudo apt-key add -
-
-echo "deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-7 main" | sudo tee /etc/apt/sources.list.d/llvm-toolchain-bionic-7.list
-wget -qO - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
-
-echo "deb [arch=amd64] https://packages.microsoft.com/ubuntu/18.04/prod bionic main" | sudo tee /etc/apt/sources.list.d/msprod.list
-wget -qO - https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
-```
+## Intel SGX Drivers
-#### 2. Install the Intel SGX DCAP Driver
-Some versions of Ubuntu may already have the Intel SGX driver installed. Check using the following command:
-
-```bash
-dmesg | grep -i sgx
-[ 106.775199] sgx: intel_sgx: Intel SGX DCAP Driver {version}
-```
-If the output is blank, install the driver:
-
-```bash
-sudo apt update
-sudo apt -y install dkms
-wget https://download.01.org/intel-sgx/sgx-dcap/1.7/linux/distro/ubuntu18.04-server/sgx_linux_x64_driver_1.35.bin -O sgx_linux_x64_driver.bin
-chmod +x sgx_linux_x64_driver.bin
-sudo ./sgx_linux_x64_driver.bin
-```
-
-> [!WARNING]
-> Please use the latest Intel SGX DCAP driver from [Intel's SGX site](https://01.org/intel-software-guard-extensions/downloads).
-
-#### 3. Install the Intel and Open Enclave packages and dependencies
--
-```bash
-sudo apt -y install clang-8 libssl-dev gdb libsgx-enclave-common libprotobuf10 libsgx-dcap-ql libsgx-dcap-ql-dev az-dcap-client open-enclave
-```
+> [!NOTE]
+> Intel SGX drivers are already part of the Ubuntu and Windows Azure Gallery images. No separate installation of the drivers is required. Optionally, you can update the existing drivers shipped in the images by visiting the [Intel SGX DCAP drivers list](https://01.org/intel-software-guard-extensions/downloads).
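Since the drivers ship in the gallery images, a quick sanity check can confirm the SGX device is exposed on a running VM. The sketch below is an illustration, not part of the official article; the device paths checked are the common ones and vary by kernel and driver version:

```shell
# Check whether an SGX device node is exposed on this VM.
# Older DCAP drivers expose /dev/sgx/enclave; newer in-kernel
# drivers expose /dev/sgx_enclave.
if ls /dev/sgx_enclave /dev/sgx/enclave >/dev/null 2>&1; then
  echo "SGX device node found"
else
  echo "SGX device node not found"
fi
```

On a non-SGX machine this reports that no device node was found, which on a DCsv2-series VM would indicate a driver problem.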
-> [!NOTE]
-> This step also installs the [az-dcap-client](https://github.com/microsoft/azure-dcap-client) package which is necessary for performing remote attestation in Azure.
+## Optional: Testing enclave apps built with Open Enclave SDK (OE SDK) <a id="Install"></a>
-#### 4. **Verify the Open Enclave SDK install**
+Follow the step-by-step instructions to install the [OE SDK](https://github.com/openenclave/openenclave) on your DCsv2-Series virtual machine running an Ubuntu 18.04 LTS Gen 2 image.
-See [Using the Open Enclave SDK](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/Linux_using_oe_sdk.md) on GitHub for verifying and using the installed SDK.
+If your virtual machine runs on Ubuntu 18.04 LTS Gen 2, you'll need to follow [installation instructions for Ubuntu 18.04](https://github.com/openenclave/openenclave/blob/master/docs/GettingStartedDocs/install_oe_sdk-Ubuntu_18.04.md).
## Clean up resources
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
If you have a globally distributed Azure Cosmos DB account, after you enable ana
* **Network isolation using private endpoints** - You can control network access to the data in the transactional and analytical stores independently. Network isolation is done using separate managed private endpoints for each store, within managed virtual networks in Azure Synapse workspaces. To learn more, see how to [Configure private endpoints for analytical store](analytical-store-private-endpoints.md) article.
-* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
+* **Data encryption with customer-managed keys** - You can seamlessly encrypt the data across transactional and analytical stores using the same customer-managed keys in an automatic and transparent manner. Azure Synapse Link only supports configuring customer-managed keys using your Azure Cosmos DB account's managed identity. You must configure your account's managed identity in your Azure Key Vault access policy before [enabling Azure Synapse Link](configure-synapse-link.md#enable-synapse-link) on your account. To learn more, see how to [Configure customer-managed keys using Azure Cosmos DB accounts' managed identities](how-to-setup-cmk.md#using-managed-identity) article.
## Support for multiple Azure Synapse Analytics runtimes
cosmos-db Queries Mongo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/queries-mongo.md
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
| project requestCharge_s , TimeGenerated, activityId_g; AzureDiagnostics | where Category == "MongoRequests"
- | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | project piiCommandText_s, activityId_g, databaseName_s , collectionName_s
| join kind=inner topRequestsByRUcharge on activityId_g
- | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
+ | project databaseName_s , collectionName_s , piiCommandText_s , requestCharge_s, TimeGenerated
| order by requestCharge_s desc | take 10 ```
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
| project OperationName , TimeGenerated, activityId_g; AzureDiagnostics | where Category == "MongoRequests"
- | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | project piiCommandText_s, activityId_g, databaseName_s , collectionName_s
| join kind=inner throttledRequests on activityId_g
- | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
```
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
| project OperationName , TimeGenerated, activityId_g; AzureDiagnostics | where Category == "MongoRequests"
- | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | project piiCommandText_s, activityId_g, databaseName_s , collectionName_s
| join kind=inner throttledRequests on activityId_g
- | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ | project databaseName_s , collectionName_s , piiCommandText_s , OperationName, TimeGenerated
```
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
AzureDiagnostics | where Category == "MongoRequests" //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
+ //| where databaseName_s == "DBNAME" and collectionName_s == "COLLECTIONNAME"
| join kind=inner operationsbyUserAgent on activityId_g | summarize max(responseLength_s1) by piiCommandText_s | order by max_responseLength_s1 desc
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
| where TimeGenerated >= now(-1d) | where Category == 'PartitionKeyRUConsumption' //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
+ //| where databaseName_s == "DBNAME" and collectionName_s == "COLLECTIONNAME"
// filter by operation type //| where operationType_s == 'Create' | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
| where TimeGenerated >= now(-1d) | where Category == 'PartitionKeyRUConsumption' //specify collection and database
- //| where databasename_s == "DBNAME" and collectioname_s == "COLLECTIONNAME"
+ //| where databaseName_s == "DBNAME" and collectionName_s == "COLLECTIONNAME"
// filter by operation type //| where operationType_s == 'Create' | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
For [resource-specific tables](cosmosdb-monitor-resource-logs.md#create-setting-
## Next steps * For more information on how to create diagnostic settings for Cosmos DB see [Creating Diagnostics settings](cosmosdb-monitor-resource-logs.md) article.
-* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see [create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Read Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/read-change-feed.md
Previously updated : 10/27/2020 Last updated : 06/30/2021
You can parallelize the processing of changes across multiple clients, just as y
There is no built-in "at-least-once" delivery guarantee with the pull model. The pull model gives you low level control to decide how you would like to handle errors.
-> [!NOTE]
-> The change feed pull model is currently in [preview in the Azure Cosmos DB .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.15.0-preview) only. The preview is not yet available for other SDK versions.
- ## Change feed in APIs for Cassandra and MongoDB Change feed functionality is surfaced as change streams in MongoDB API and Query with predicate in Cassandra API. To learn more about the implementation details for MongoDB API, see the [Change streams in the Azure Cosmos DB API for MongoDB](mongodb-change-streams.md).
cosmos-db Troubleshoot Forbidden https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-forbidden.md
Request is blocked. Please check your authorization token and Cosmos DB account
Verify that your current [firewall settings](how-to-configure-firewall.md) are correct and include the IPs or networks you are trying to connect from. If you recently updated them, keep in mind that changes can take **up to 15 minutes to apply**.
+## Non-data operations are not allowed
+This scenario happens when non-data [operations are disallowed in the account](how-to-restrict-user-data.md#disallow-the-execution-of-non-data-operations). In this scenario, it's common to see errors like the following:
+
+```
+Operation 'POST' on resource 'calls' is not allowed through Azure Cosmos DB endpoint
+```
+
+### Solution
+Perform the operation through Azure Resource Manager, the Azure portal, Azure CLI, or Azure PowerShell. Alternatively, re-allow the execution of non-data operations.
+ ## Next steps * Configure [IP Firewall](how-to-configure-firewall.md). * Configure access from [virtual networks](how-to-configure-vnet-service-endpoint.md).
databox-online Azure Stack Edge Gpu Back Up Virtual Machine Disks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-back-up-virtual-machine-disks.md
Previously updated : 04/12/2021 Last updated : 06/25/2021 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
Before you back up VMs, make sure that:
## Back up a VM Disk
+### [Az](#tab/az)
+
+1. Get a list of the VMs running on your device. Identify the VM that you want to stop and the corresponding resource group.
+
+ ```powershell
+ Get-AzVM
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> Get-AzVM
+
+ ResourceGroupName Name Location VmSize OsType NIC
+ -- - --
+ MYASEAZRG myazvm dbelocal Standard_D1_v2 Linux myaznic1
+ MYASERG myasewindowsvm2 dbelocal Standard_D1_v2 Linux myasewindowsvm2nic
+
+ PS C:\Users\user>
+ ```
+
+1. Set some parameters.
+
+ ```powershell
+ $ResourceGroupName = "<Resource group name>"
+ $VmName = "<VM name>"
+ ```
+1. Stop the VM.
+
+ ```powershell
+ Stop-AzVM -ResourceGroupName $ResourceGroupName -Name $VmName
+ ```
+
+ Here is an example output:
+
+ ```powershell
+ PS C:\Users\user> Stop-AzVM -ResourceGroupName myaserg -Name myasewindowsvm2
+ Virtual machine stopping operation
+ This cmdlet will stop the specified virtual machine. Do you want to continue?
+ [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Y
+
+ OperationId : 8a2fa7ea-99d0-4f9f-b8ca-e37389cd8413
+ Status : Succeeded
+ StartTime : 6/28/2021 11:51:33 AM
+ EndTime : 6/28/2021 11:51:50 AM
+ Error :
+
+ PS C:\Users\user>
+ ```
+
+ You can also stop the VM from the Azure portal.
+
+
+2. Take a snapshot of the VM disk and save the snapshot to a local resource group. You can use this procedure for both OS and data disks.
+
+ 1. Get the list of disks on your device, or in a specific resource group. Make a note of the name of the disk to back up.
+
+ ```powershell
+ $Disk = Get-AzDisk -ResourceGroupName $ResourceGroupName
+ $Disk.Name
+ ```
+ Here is an example output:
+
+ ```output
+ PS C:\Users\user> $Disk = Get-AzDisk -ResourceGroupName myaserg
+ PS C:\Users\user> $Disk.Name
+ myasewindowsvm2_disk1_2a066432056446669368969835d5e3b3
+ myazdisk1
+ myvmdisk2
+ PS C:\Users\user>
+ ```
+ 1. Create a local resource group to serve as the destination for the VM snapshot. Location is set as `dbelocal`.
+
+ ```powershell
+ New-AzResourceGroup -ResourceGroupName <Resource group name> -Location dbelocal
+ ```
+
+ ```output
+ PS C:\Users\user> New-AzResourceGroup -ResourceGroupName myaseazrg1 -Location dbelocal
+
+ ResourceGroupName : myaseazrg1
+ Location : dbelocal
+ ProvisioningState : Succeeded
+ Tags :
+ ResourceId : /subscriptions/.../resourceGroups/myaseazrg1
+
+ PS C:\Users\user>
+ ```
+
+ 1. Set some parameters.
+
+ ```powershell
+ $DiskResourceGroup = "<Disk resource group>"
+ $DiskName = "<Disk name>"
+ $SnapshotName = "<Snapshot name>"
+ $DestinationRG = "<Snapshot destination resource group>"
+ ```
+
+ 1. Set the snapshot configuration and take the snapshot.
+
+ ```powershell
+ $Disk = Get-AzDisk -ResourceGroupName $DiskResourceGroup -DiskName $DiskName
+ $SnapshotConfig = New-AzSnapshotConfig -SourceUri $Disk.Id -CreateOption Copy -Location 'dbelocal'
+ $Snapshot = New-AzSnapshot -Snapshot $SnapshotConfig -SnapshotName $SnapshotName -ResourceGroupName $DestinationRG
+ ```
+ Verify that the snapshot is created in the destination resource group.
+
+ ```powershell
+ Get-AzSnapshot -ResourceGroupName $DestinationRG
+ ```
+ Here is an example output:
+
+ ```output
+ PS C:\Users\user> $DiskResourceGroup = "myaserg"
+ PS C:\Users\user> $DiskName = "myazdisk1"
+ PS C:\Users\user> $SnapshotName = "myasdisk1ss"
+ PS C:\Users\user> $DestinationRG = "myaseazrg1"
+ PS C:\Users\user> $Disk = Get-AzDisk -ResourceGroupName $DiskResourceGroup -DiskName $DiskName
+ PS C:\Users\user> $SnapshotConfig = New-AzSnapshotConfig -SourceUri $Disk.Id -CreateOption Copy -Location 'dbelocal'
+ PS C:\Users\user> $Snapshot=New-AzSnapshot -Snapshot $SnapshotConfig -SnapshotName $SnapshotName -ResourceGroupName $DestinationRG
+ PS C:\Users\user> Get-AzSnapshot -ResourceGroupName $DestinationRG
+
+ ResourceGroupName : myaseazrg1
+ ManagedBy :
+ Sku : Microsoft.Azure.Management.Compute.Models.SnapshotSku
+ TimeCreated : 6/28/2021 6:57:40 PM
+ OsType :
+ HyperVGeneration :
+ CreationData : Microsoft.Azure.Management.Compute.Models.CreationDat
+ a
+ DiskSizeGB : 10
+ DiskSizeBytes : 10737418240
+ UniqueId : fbc1cfac-8bbb-44d8-8aa4-9e8811950fcc
+ EncryptionSettingsCollection :
+ ProvisioningState : Succeeded
+ Incremental : False
+ Encryption : Microsoft.Azure.Management.Compute.Models.Encryption
+ Id : /subscriptions/.../r
+ esourceGroups/myaseazrg1/providers/Microsoft.Compute/
+ snapshots/myasdisk1ss
+ Name : myasdisk1ss
+ Type : Microsoft.Compute/snapshots
+ Location : DBELocal
+ Tags : {}
+
+ PS C:\Users\user>
+ ```
+
+### [AzureRM](#tab/azurerm)
+ 1. Get a list of the VMs running on your device. Identify the VM that you want to stop. ```powershell
Before you back up VMs, make sure that:
Location : dbelocal ProvisioningState : Succeeded Tags :
- ResourceId : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg3
+ ResourceId : /subscriptions/.../resourceGroups/myaserg3
PS C:\Users\user> ```
Before you back up VMs, make sure that:
DiskSizeGB : 10 EncryptionSettings : ProvisioningState : Succeeded
- Id : /subscriptions/992601bc-b03d-4d72-598e-d24eac232122/resourceGroups/myaserg3/providers/Microsoft.Compute/snapshots/myasetestdisk1_ss
+ Id : /subscriptions/.../resourceGroups/myaserg3/providers/Microsoft.Compute/snapshots/myasetestdisk1_ss
Name : myasetestdisk1_ss Type : Microsoft.Compute/snapshots Location : DBELocal
Before you back up VMs, make sure that:
PS C:\Users\user> ```+ ## Copy the snapshot into a local storage account
- Copy the snapshots to a local storage account on your device.
+Copy the snapshots to a local storage account on your device.
+
+### [Az](#tab/az)
1. Set some parameters.
Before you back up VMs, make sure that:
1. Create a local storage account on your device. ```powershell
- New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Storage account resource group> -Location DBELocal -SkuName Standard_LRS
+ New-AzStorageAccount -Name $StorageAccountName -ResourceGroupName $StorageAccountRG -Location DBELocal -SkuName Standard_LRS
``` Here is an example output:
+ ```output
+ PS C:\Users\user> New-AzStorageAccount -Name $StorageAccountName -ResourceGroupName $StorageAccountRG -Location DBELocal -SkuName Standard_LRS
+
+ StorageAccountName ResourceGroupName PrimaryLocation SkuName Kind AccessTier
+ -- - - -
+ myaseazsa1 myaseazrg2 DBELocal Standard_LRS Storage
+ PS C:\Users\user>
+ ```
+
+1. Create a container in the local storage account that you created.
+ ```powershell
+ $keys = Get-AzStorageAccountKey -ResourceGroupName $StorageAccountRG -Name $StorageAccountName
+ $keyValue = $keys[0].Value
+ $storageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue -Protocol Http -Endpoint $StorageEndpointSuffix;
+ $container = New-AzStorageContainer -Name $DestStorageContainer -Context $storageContext -Permission Container -ErrorAction Ignore;
+ ```
+ Here is an example output:
+
+ ```output
+ PS C:\Users\user> $StorageAccountRG = "myaseazrg2"
+ PS C:\Users\user> $StorageAccountName = "myaseazsa1"
+ PS C:\Users\user> $StorageEndpointSuffix = "myasegpu.wdshcsso.com"
+ PS C:\Users\user> $DestStorageContainer = "testcont1"
+ PS C:\Users\user> $DestFileName = "testfile1"
+
+ PS C:\Users\user> $keys = Get-AzStorageAccountKey -ResourceGroupName $StorageAccountRG -Name $StorageAccountName
+ PS C:\Users\user> $keyValue = $keys[0].Value
+ PS C:\Users\user> $storageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue -Protocol Http -Endpoint $StorageEndpointSuffix;
+ PS C:\Users\user> $storagecontext
+
+ StorageAccountName : myaseazsa1
+ BlobEndPoint : http://myaseazsa1.blob.myasegpu.wdshcsso.com/
+ TableEndPoint : http://myaseazsa1.table.myasegpu.wdshcsso.com/
+ QueueEndPoint : http://myaseazsa1.queue.myasegpu.wdshcsso.com/
+ FileEndPoint : http://myaseazsa1.file.myasegpu.wdshcsso.com/
+ Context : Microsoft.WindowsAzure.Commands.Storage.AzureStorageContext
+ Name :
+ StorageAccount : BlobEndpoint=http://myaseazsa1.blob.myasegpu.wdshcsso.com/;Que
+ ueEndpoint=http://myaseazsa1.queue.myasegpu.wdshcsso.com/;Tabl
+ eEndpoint=http://myaseazsa1.table.myasegpu.wdshcsso.com/;FileE
+ ndpoint=http://myaseazsa1.file.myasegpu.wdshcsso.com/;AccountN
+ ame=myaseazsa1;AccountKey=[key hidden]
+ TableStorageAccount : BlobEndpoint=http://myaseazsa1.blob.myasegpu.wdshcsso.com/;Que
+ ueEndpoint=http://myaseazsa1.queue.myasegpu.wdshcsso.com/;Tabl
+ eEndpoint=http://myaseazsa1.table.myasegpu.wdshcsso.com/;FileE
+ ndpoint=http://myaseazsa1.file.myasegpu.wdshcsso.com/;DefaultE
+ ndpointsProtocol=https;AccountName=myaseazsa1;AccountKey=[key
+ hidden]
+ Track2OauthToken :
+ EndPointSuffix : myasegpu.wdshcsso.com/
+ ConnectionString : BlobEndpoint=http://myaseazsa1.blob.myasegpu.wdshcsso.com/;Que
+ ueEndpoint=http://myaseazsa1.queue.myasegpu.wdshcsso.com/;Tabl
+ eEndpoint=http://myaseazsa1.table.myasegpu.wdshcsso.com/;FileE
+ ndpoint=http://myaseazsa1.file.myasegpu.wdshcsso.com/;AccountN
+ ame=myaseazsa1;AccountKey=itOn5Awjh3hnoGKL7EDQ681zhIKG/szCt05Z
+ IWAxP/T22gwEXb5l0sKjI833Hqpc0MsBiSH2rM6NuuwnJyEO1Q==
+ ExtendedProperties : {}
+
+
+
+ PS C:\Users\user> $container = New-AzStorageContainer -Name $DestStorageContainer -Context $storageContext -Permission Container -ErrorAction Ignore;
+ PS C:\Users\user> $container
+ Blob End Point: http://myaseazsa1.blob.myasegpu.wdshcsso.com/
+
+ Name PublicAccess LastModified
+ -
+ testcont1 Container 6/28/2021 2:46:03 PM +00:00
+
+ PS C:\Users\user>
+ ```
+
+ You can also use Azure Storage Explorer to [Create a local storage account](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#create-a-storage-account) and then [Create a container in the local storage account](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#use-storage-explorer-for-upload) on your device.
+++
+1. Download the snapshot into the local storage account.
+
+ ```powershell
+ $sassnapshot = Grant-AzSnapshotAccess -ResourceGroupName $DestinationRG -SnapshotName $SnapshotName -Access 'Read' -DurationInSecond 3600
+ $destContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue
+ Start-AzStorageBlobCopy -AbsoluteUri $sassnapshot.AccessSAS -DestContainer $DestStorageContainer -DestContext $destContext -DestBlob $DestFileName
+ ```
+
+ Here is an example output:
+
+ ```output
+ PS C:\Users\user> $sassnapshot
+
+ AccessSAS : https://md-2.blob.myasegpu.wdshcsso.com/22615edc48654bb8b77e383d3a7649ac
+ /abcd.vhd?sv=2017-04-17&sr=b&si=43ca8395-6942-496b-92d7-f0d6dc68ab63&sk=system-1&sig
+ =K%2Bc34uq7%2BLcTetG%2Bj9loOH440e03vDkD24Ug0Gf%2Bex8%3D
+
+ PS C:\Users\user> $destContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $keyValue
+ PS C:\Users\user> $destContext
+
+
+ StorageAccountName : myaseazsa1
+ BlobEndPoint : https://myaseazsa1.blob.myasegpu.wdshcsso.com/
+ TableEndPoint : https://myaseazsa1.table.myasegpu.wdshcsso.com/
+ QueueEndPoint : https://myaseazsa1.queue.myasegpu.wdshcsso.com/
+ FileEndPoint : https://myaseazsa1.file.myasegpu.wdshcsso.com/
+ Context : Microsoft.WindowsAzure.Commands.Storage.AzureStorageContext
+ Name :
+ StorageAccount : BlobEndpoint=https://myaseazsa1.blob.myasegpu.wdshcsso.com/;Qu
+ eueEndpoint=https://myaseazsa1.queue.myasegpu.wdshcsso.com/;Ta
+ bleEndpoint=https://myaseazsa1.table.myasegpu.wdshcsso.com/;Fi
+ leEndpoint=https://myaseazsa1.file.myasegpu.wdshcsso.com/;Acco
+ untName=myaseazsa1;AccountKey=[key hidden]
+ TableStorageAccount : BlobEndpoint=https://myaseazsa1.blob.myasegpu.wdshcsso.com/;Qu
+ eueEndpoint=https://myaseazsa1.queue.myasegpu.wdshcsso.com/;Ta
+ bleEndpoint=https://myaseazsa1.table.myasegpu.wdshcsso.com/;Fi
+ leEndpoint=https://myaseazsa1.file.myasegpu.wdshcsso.com/;Defa
+ ultEndpointsProtocol=https;AccountName=myaseazsa1;AccountKey=[
+ key hidden]
+ Track2OauthToken :
+ EndPointSuffix : myasegpu.wdshcsso.com/
+ ConnectionString : BlobEndpoint=https://myaseazsa1.blob.myasegpu.wdshcsso.com/;Qu
+ eueEndpoint=https://myaseazsa1.queue.myasegpu.wdshcsso.com/;Ta
+ bleEndpoint=https://myaseazsa1.table.myasegpu.wdshcsso.com/;Fi
+ leEndpoint=https://myaseazsa1.file.myasegpu.wdshcsso.com/;Acco
+ untName=myaseazsa1;AccountKey=itOn5Awjh3hnoGKL7EDQ681zhIKG/szC
+ t05ZIWAxP/T22gwEXb5l0sKjI833Hqpc0MsBiSH2rM6NuuwnJyEO1Q==
+ ExtendedProperties : {}
+
+ PS C:\Users\user> Start-AzStorageBlobCopy -AbsoluteUri $sassnapshot.AccessSAS -DestContainer $DestStorageContainer -DestContext $destContext -DestBlob $DestFileName
+
+ AccountName: myaseazsa1, ContainerName: testcont1
+
+ Name BlobType Length ContentType LastMo
+ dified
+ - -- --
+ testfile1 BlockBlob -1 202...
+
+ PS C:\Users\user>
+ ```
+
+ You can also use Storage Explorer to verify that the snapshot was copied correctly to the storage account.
+
+ ![Storage Explorer showing the backup in the container in local storage account](media/azure-stack-edge-gpu-back-up-virtual-machine-disks/back-up-virtual-machine-disk-2.png)
+
+### [AzureRM](#tab/azurerm)
+
+Copy the snapshots to a local storage account on your device.
+
+1. Set some parameters.
+
+ ```powershell
+ $StorageAccountRG = "<Local storage account resource group>"
+ $StorageAccountName = "<Storage account name>"
+ $StorageEndpointSuffix = "<Connection string in format: DeviceName.DnsDomain.com>"
+ $DestStorageContainer = "<Destination storage container>"
+ $DestFileName = "<Blob file name>"
+ ```
+
+1. Create a local storage account on your device.
+
+ ```powershell
+ New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Storage account resource group> -Location DBELocal -SkuName Standard_LRS
+ ```
+
+ Here is an example output:
+
+ ```output
PS C:\Users\user> New-AzureRmStorageAccount -Name myasesa4 -ResourceGroupName myaserg4 -Location DBELocal -SkuName Standard_LRS StorageAccountName ResourceGroupName Location SkuName Kind AccessTier CreationTime ProvisioningState EnableHttpsTrafficOnly -- -- - - - --
Before you back up VMs, make sure that:
``` Here is an example output:
- ```powershell
+ ```output
PS C:\Users\user> $StorageAccountName = "myasesa4" PS C:\Users\user> $StorageAccountRG = "myaserg4" PS C:\Users\user> $DestStorageContainer = "myasecont2"
Before you back up VMs, make sure that:
Here is an example output:
- ```powershell
+ ```output
PS C:\Users\user> $sassnapshot= Grant-AzureRmSnapshotAccess -ResourceGroupName $DestinationRG -SnapshotName $SnapshotName -Access 'Read' -DurationInSecond 3600 PS C:\Users\user> $sassnapshot
Before you back up VMs, make sure that:
![Storage Explorer showing the backup in the container in local storage account](media/azure-stack-edge-gpu-back-up-virtual-machine-disks/back-up-virtual-machine-disk-1.png) ++ ## Download VHD to external target To move your backups to an external location, you can use Azure Storage Explorer or AzCopy.
databox-online Azure Stack Edge Gpu Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md
Title: Connect to Azure Resource Manager on your Azure Stack Edge Pro GPU device
+ Title: Connect to Azure Resource Manager on your Azure Stack Edge GPU device
description: Describes how to connect to the Azure Resource Manager running on your Azure Stack Edge Pro GPU using Azure PowerShell.
Previously updated : 06/08/2021- Last updated : 06/09/2021+ #Customer intent: As an IT admin, I need to understand how to connect to Azure Resource Manager on my Azure Stack Edge Pro device so that I can manage resources.
-# Connect to Azure Resource Manager on your Azure Stack Edge Pro device
+# Connect to Azure Resource Manager on your Azure Stack Edge device
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)]
-Azure Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. The Azure Stack Edge Pro device supports the same Azure Resource Manager APIs to create, update, and delete VMs in a local subscription. This support lets you manage the device in a manner consistent with the cloud.
+Azure Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. The Azure Stack Edge device supports the same Azure Resource Manager APIs to create, update, and delete VMs in a local subscription. This support lets you manage the device in a manner consistent with the cloud.
-This tutorial describes how to connect to the local APIs on your Azure Stack Edge Pro device via Azure Resource Manager using Azure PowerShell.
+This article describes how to connect to the local APIs on your Azure Stack Edge device via Azure Resource Manager using Azure PowerShell.
-## About Azure Resource Manager
-Azure Resource Manager provides a consistent management layer to call the Azure Stack Edge Pro device API and perform operations such as create, update, and delete VMs. The architecture of the Azure Resource Manager is detailed in the following diagram.
-
-![Diagram for Azure Resource Manager](media/azure-stack-edge-gpu-connect-resource-manager/edge-device-flow.svg)
--
-## Endpoints on Azure Stack Edge Pro device
+## Endpoints on Azure Stack Edge device
The following table summarizes the various endpoints exposed on your device, the supported protocols, and the ports to access those endpoints. Throughout the article, you will find references to these endpoints.
The following table summarizes the various endpoints exposed on your device, the
| | | | | | | 1. | Azure Resource Manager | https | 443 | To connect to Azure Resource Manager for automation | | 2. | Security token service | https | 443 | To authenticate via access and refresh tokens |
-| 3. | Blob | https | 443 | To connect to Blob storage via REST |
-
+| 3. | Blob* | https | 443 | To connect to Blob storage via REST |
+\* *Connection to the blob storage endpoint is not required to connect to Azure Resource Manager.*
+
## Connecting to Azure Resource Manager workflow The process of connecting to local APIs of the device using Azure Resource Manager requires the following steps: | Step # | You'll do this step ... | .. on this location. | | | | |
-| 1. | [Configure your Azure Stack Edge Pro device](#step-1-configure-azure-stack-edge-pro-device) | Local web UI |
+| 1. | [Configure your Azure Stack Edge device](#step-1-configure-azure-stack-edge-device) | Local web UI |
| 2. | [Create and install certificates](#step-2-create-and-install-certificates) | Windows client/local web UI | | 3. | [Review and configure the prerequisites](#step-3-install-powershell-on-the-client) | Windows client | | 4. | [Set up Azure PowerShell on the client](#step-4-set-up-azure-powershell-on-the-client) | Windows client |
The following sections detail each of the above steps in connecting to Azure Res
## Prerequisites
-Before you begin, make sure that the client used for connecting to device via Azure Resource Manager is using TLS 1.2. For more information, go to [Configure TLS 1.2 on Windows client accessing Azure Stack Edge Pro device](azure-stack-edge-gpu-configure-tls-settings.md).
+Before you begin, make sure that the client used for connecting to device via Azure Resource Manager is using TLS 1.2. For more information, go to [Configure TLS 1.2 on Windows client accessing Azure Stack Edge device](azure-stack-edge-gpu-configure-tls-settings.md).
-## Step 1: Configure Azure Stack Edge Pro device
+## Step 1: Configure Azure Stack Edge device
-Take the following steps in the local web UI of your Azure Stack Edge Pro device.
+Take the following steps in the local web UI of your Azure Stack Edge device.
-1. Complete the network settings for your Azure Stack Edge Pro device.
+1. Complete the network settings for your Azure Stack Edge device.
![Local web UI "Network settings" page](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/compute-network-2.png)
Take the following steps in the local web UI of your Azure Stack Edge Pro device
## Step 2: Create and install certificates
-Certificates ensure that your communication is trusted. On your Azure Stack Edge Pro device, self-signed appliance, blob, and Azure Resource Manager certificates are automatically generated. Optionally, you can bring in your own signed blob and Azure Resource Manager certificates as well.
+Certificates ensure that your communication is trusted. On your Azure Stack Edge device, self-signed appliance, blob, and Azure Resource Manager certificates are automatically generated. Optionally, you can bring in your own signed blob and Azure Resource Manager certificates as well.
When you bring in a signed certificate of your own, you also need the corresponding signing chain of the certificate. For the signing chain, Azure Resource Manager, and the blob certificates on the device, you will need the corresponding certificates on the client machine also to authenticate and communicate with the device. To connect to Azure Resource Manager, you will need to create or get signing chain and endpoint certificates, import these certificates on your Windows client, and finally upload these certificates on the device.
-### Create certificates (Optional)
+### Create certificates
For test and development use only, you can use Windows PowerShell to create certificates on your local system. While creating the certificates for the client, follow these guidelines:

1. You first need to create a root certificate for the signing chain. For more information, see the steps to [Create signing chain certificates](azure-stack-edge-gpu-manage-certificates.md#create-signing-chain-certificate).
-2. You can next create the endpoint certificates for the blob and Azure Resource Manager. You can get these endpoints from the **Device** page in the local web UI. See the steps to [Create endpoint certificates](azure-stack-edge-gpu-manage-certificates.md#create-signed-endpoint-certificates).
+2. You can next create the endpoint certificates for Azure Resource Manager and blob (optional). You can get these endpoints from the **Device** page in the local web UI. See the steps to [Create endpoint certificates](azure-stack-edge-gpu-manage-certificates.md#create-signed-endpoint-certificates).
3. For all these certificates, make sure that the subject name and subject alternate name conform to the following guidelines:

    |Type |Subject name (SN) |Subject alternative name (SAN) |Subject name example |
    |---------|---------|---------|---------|
    |Azure Resource Manager|`management.<Device name>.<Dns Domain>`|`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`|`management.mydevice1.microsoftdatabox.com` |
- |Blob storage|`*.blob.<Device name>.<Dns Domain>`|`*.blob.< Device name>.<Dns Domain>`|`*.blob.mydevice1.microsoftdatabox.com` |
+ |Blob storage*|`*.blob.<Device name>.<Dns Domain>`|`*.blob.<Device name>.<Dns Domain>`|`*.blob.mydevice1.microsoftdatabox.com` |
|Multi-SAN single certificate for both endpoints|`<Device name>.<dnsdomain>`|`login.<Device name>.<Dns Domain>`<br>`management.<Device name>.<Dns Domain>`<br>`*.blob.<Device name>.<Dns Domain>`|`mydevice1.microsoftdatabox.com` |
+\* Blob storage is not required to connect to Azure Resource Manager. It is listed here in case you are creating local storage accounts on your device.
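The linked steps cover certificate creation in detail. As a quick sketch for test and development only (the device name `mydevice1` and DNS domain `microsoftdatabox.com` are the example values from the table above; substitute your own), you could create a signing chain certificate and an Azure Resource Manager endpoint certificate with `New-SelfSignedCertificate`:

```powershell
# Test/dev only. Create a root certificate for the signing chain.
$rootCert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject "CN=AzureStackEdgeTestRoot" -KeyExportPolicy Exportable `
    -HashAlgorithm SHA256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\LocalMachine\My" `
    -KeyUsageProperty Sign -KeyUsage CertSign

# Create the Azure Resource Manager endpoint certificate, signed by the root.
# Subject name and SANs follow the guidelines in the table above.
New-SelfSignedCertificate -Type Custom `
    -DnsName "management.mydevice1.microsoftdatabox.com","login.mydevice1.microsoftdatabox.com" `
    -Subject "CN=management.mydevice1.microsoftdatabox.com" `
    -KeyExportPolicy Exportable -HashAlgorithm SHA256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\LocalMachine\My" -Signer $rootCert
```

You would then export these certificates with their private keys and upload them to the device as described in the next section.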
+ For more information on certificates, see [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).

### Upload certificates on the device
The Windows client where you will invoke the Azure Resource Manager APIs needs t
## Step 3: Install PowerShell on the client
+### [Az](#tab/Az)
+ Your Windows client must meet the following prerequisites:
-1. Run PowerShell Version 5.0. You must have PowerShell version 5.0. PowerShell core is not supported. To check the version of PowerShell on your system, run the following cmdlet:
+
+1. Run PowerShell version 5.0 or later. To check the version of PowerShell on your system, run the following cmdlet:
    ```powershell
    $PSVersionTable.PSVersion
    ```
Your Windows client must meet the following prerequisites:
If you don't have PowerShell 5.0, follow [Installing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-6&preserve-view=true).
- A sample output is shown below.
+ An example output is shown below.
+
+ ```output
+ Windows PowerShell
+ Copyright (C) Microsoft Corporation. All rights reserved.
+ Try the new cross-platform PowerShell https://aka.ms/pscore6
+ PS C:\windows\system32> $PSVersionTable.PSVersion
+    Major  Minor  Build  Revision
+    -----  -----  -----  --------
+    5      1      19041  906
+ ```
+
+1. You can access the PowerShell Gallery.
+
+    Run PowerShell as administrator. Verify that the PowerShellGet version is 2.2.3 or later. Additionally, verify that `PSGallery` is registered as a repository.
```powershell
+    Install-Module PowerShellGet -MinimumVersion 2.2.3
+ Import-Module -Name PackageManagement -ErrorAction Stop
+ Get-PSRepository -Name "PSGallery"
+ ```
+
+ An example output is shown below.
+
+ ```output
+    PS C:\windows\system32> Install-Module PowerShellGet -MinimumVersion 2.2.3
+ PS C:\windows\system32> Import-Module -Name PackageManagement -ErrorAction Stop
+ PS C:\windows\system32> Get-PSRepository -Name "PSGallery"
+    Name                  InstallationPolicy SourceLocation
+    ----                  ------------------ --------------
+    PSGallery             Trusted            https://www.powershellgallery.com/api/v2
+ ```
+
+### [AzureRM](#tab/AzureRM)
+
+Your Windows client must meet the following prerequisites:
++
+1. Run PowerShell version 5.0. PowerShell Core is not supported. To check the version of PowerShell on your system, run the following cmdlet:
+
+ ```powershell
+ $PSVersionTable.PSVersion
+ ```
+
+ Compare the **Major** version and ensure that it is 5.0 or later.
+
+ If you have an outdated version, see [Upgrading existing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-6&preserve-view=true#upgrading-existing-windows-powershell).
+
+    If you don't have PowerShell 5.0, follow [Installing Windows PowerShell](/powershell/scripting/install/installing-windows-powershell?view=powershell-6&preserve-view=true).
+
+ An example output is shown below.
+
+ ```output
    Windows PowerShell
    Copyright (C) Microsoft Corporation. All rights reserved.

    Try the new cross-platform PowerShell https://aka.ms/pscore6
Your Windows client must meet the following prerequisites:
    Get-PSRepository -Name "PSGallery"
    ```
- A sample output is shown below.
+ An example output is shown below.
- ```powershell
+ ```output
PS C:\windows\system32> Import-Module -Name PowerShellGet -ErrorAction Stop PS C:\windows\system32> Import-Module -Name PackageManagement -ErrorAction Stop PS C:\windows\system32> Get-PSRepository -Name "PSGallery"
Your Windows client must meet the following prerequisites:
    ----                  ------------------ --------------
    PSGallery             Trusted            https://www.powershellgallery.com/api/v2
    ```
-
+
If your repository is not trusted or you need more information, see [Validate the PowerShell Gallery accessibility](/azure-stack/operator/azure-stack-powershell-install?view=azs-1908&preserve-view=true#2-validate-the-powershell-gallery-accessibility).

## Step 4: Set up Azure PowerShell on the client
-<!--1. Verify the API profile of the client and identify which version of the Azure PowerShell modules and libraries to include on your client. In this example, the client system will be running Azure Stack 1904 or later. For more information, see [Azure Resource Manager API profiles](/azure-stack/user/azure-stack-version-profiles?view=azs-1908#azure-resource-manager-api-profiles).-->
+### [Az](#tab/Az)
+
+You will install Azure PowerShell modules on your client that will work with your device.
+
+1. Run PowerShell as an administrator. You need access to PowerShell gallery.
++
+1. First verify that there are no existing versions of `AzureRM` and `Az` modules on your client. To check, run the following commands:
+
+ ```powershell
+ # Check existing versions of AzureRM modules
+ Get-InstalledModule -Name AzureRM -AllVersions
+
+ # Check existing versions of Az modules
+ Get-InstalledModule -Name Az -AllVersions
+ ```
+
+    If there are existing versions, use the `Uninstall-Module` cmdlet to uninstall them. For more information, see
+    - [Uninstall AzureRM modules](/powershell/azure/uninstall-az-ps?view=azps-6.0.0&preserve-view=true#uninstall-the-azurerm-module).
+    - [Uninstall Az modules](/powershell/azure/uninstall-az-ps?view=azps-6.0.0&preserve-view=true#uninstall-the-az-module).
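As a minimal cleanup sketch, assuming you simply want every installed version of both modules removed (note that `AzureRM` is a rollup module, so its component `AzureRM.*` modules may need the additional steps in the linked articles):

```powershell
# Remove all installed versions of the Az and AzureRM rollup modules.
# Run from an elevated PowerShell session; -Force skips confirmation prompts.
Uninstall-Module -Name Az -AllVersions -Force -ErrorAction SilentlyContinue
Uninstall-Module -Name AzureRM -AllVersions -Force -ErrorAction SilentlyContinue
```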
-1. You will install Azure PowerShell modules on your client that will work with your device.
+1. To install the required Azure PowerShell modules from the PowerShell Gallery, run the following command:
- a. Run PowerShell as an administrator. You need access to PowerShell gallery.
+ - If your client is using PowerShell Core version 7.0 and later:
+ ```powershell
+ # Install the Az.BootStrapper module. Select Yes when prompted to install NuGet.
+
+ Install-Module -Name Az.BootStrapper
+
+ # Install and import the API Version Profile into the current PowerShell session.
+
+ Use-AzProfile -Profile 2020-09-01-hybrid -Force
+
+ # Confirm the installation of PowerShell
+ Get-Module -Name "Az*" -ListAvailable
+ ```
+
+ - If your client is using PowerShell 5.1 and later:
+
+ ```powershell
+ #Install the Az module version 1.10.0
+
+    Install-Module -Name Az -RequiredVersion 1.10.0
+ ```
- b. To install the required Azure PowerShell modules from the PowerShell Gallery, run the following command:
+3. Make sure that you have Az module version 1.10.0 running at the end of the installation.
+
+
+ If you used PowerShell core 7.0 and later, the example output below indicates that the Az version 1.10.0 modules were installed successfully.
+
+ ```output
+ PS C:\windows\system32> Install-Module -Name Az.BootStrapper
+ PS C:\windows\system32> Use-AzProfile -Profile 2020-09-01-hybrid -Force
+ Loading Profile 2020-09-01-hybrid
+ PS C:\windows\system32> Get-Module -Name "Az*" -ListAvailable
+ ```
+
+    If you used PowerShell 5.1 and later, the example output below indicates that the Az version 1.10.0 modules were installed successfully.
+
+    ```output
+ PS C:\WINDOWS\system32> Get-InstalledModule -Name Az -AllVersions
+    Version    Name    Repository    Description
+    -------    ----    ----------    -----------
+    1.10.0     Az      PSGallery     Mic...
+
+ PS C:\WINDOWS\system32>
+ ```
+
+### [AzureRM](#tab/AzureRM)
+
+You will install Azure PowerShell modules on your client that will work with your device.
+
+1. Run PowerShell as an administrator. You need access to PowerShell gallery.
++
+2. To install the required Azure PowerShell modules from the PowerShell Gallery, run the following command:
```powershell # Install the AzureRM.BootStrapper module. Select Yes when prompted to install NuGet. Install-Module -Name AzureRM.BootStrapper
- # Install and import the API Version Profile into the current PowerShell session.
+ # Install and import the API Version Profile into the current PowerShell session.
Use-AzureRmProfile -Profile 2019-03-01-hybrid -Force
If your repository is not trusted or you need more information, see [Validate th
You will now need to install the required version again.
-A sample output is shown below that indicates the AzureRM version 2.5.0 modules were installed successfully.
-
-```powershell
-PS C:\windows\system32> Install-Module -Name AzureRM.BootStrapper
-PS C:\windows\system32> Use-AzureRmProfile -Profile 2019-03-01-hybrid -Force
-Loading Profile 2019-03-01-hybrid
-PS C:\windows\system32> Get-Module -Name "Azure*" -ListAvailable
-
- Directory: C:\Program Files\WindowsPowerShell\Modules
-
-ModuleType Version Name ExportedCommands
-- - - -
-Script 4.5.0 Azure.Storage {Get-AzureStorageTable, New-AzureStorageTableSASToken, New...
-Script 2.5.0 AzureRM
-Script 0.5.0 AzureRM.BootStrapper {Update-AzureRmProfile, Uninstall-AzureRmProfile, Install-...
-Script 4.6.1 AzureRM.Compute {Remove-AzureRmAvailabilitySet, Get-AzureRmAvailabilitySet...
-Script 3.5.1 AzureRM.Dns {Get-AzureRmDnsRecordSet, New-AzureRmDnsRecordConfig, Remo...
-Script 5.1.5 AzureRM.Insights {Get-AzureRmMetricDefinition, Get-AzureRmMetric, Remove-Az...
-Script 4.2.0 AzureRM.KeyVault {Add-AzureKeyVaultCertificate, Set-AzureKeyVaultCertificat...
-Script 5.0.1 AzureRM.Network {Add-AzureRmApplicationGatewayAuthenticationCertificate, G...
-Script 5.8.3 AzureRM.profile {Disable-AzureRmDataCollection, Disable-AzureRmContextAuto...
-Script 6.4.3 AzureRM.Resources {Get-AzureRmProviderOperation, Remove-AzureRmRoleAssignmen...
-Script 5.0.4 AzureRM.Storage {Get-AzureRmStorageAccount, Get-AzureRmStorageAccountKey, ...
-Script 4.0.2 AzureRM.Tags {Remove-AzureRmTag, Get-AzureRmTag, New-AzureRmTag}
-Script 4.0.3 AzureRM.UsageAggregates Get-UsageAggregates
-Script 5.0.1 AzureRM.Websites {Get-AzureRmAppServicePlan, Set-AzureRmAppServicePlan, New...
-
-
- Directory: C:\Program Files (x86)\Microsoft Azure Information Protection\Powershell
-
-ModuleType Version Name ExportedCommands
-- - - -
-Binary 1.48.204.0 AzureInformationProtection {Clear-RMSAuthentication, Get-RMSFileStatus, Get-RMSServer...
-```
-
+ An example output shown below indicates that the AzureRM version 2.5.0 modules were installed successfully.
+
+    ```output
+ PS C:\windows\system32> Install-Module -Name AzureRM.BootStrapper
+ PS C:\windows\system32> Use-AzureRmProfile -Profile 2019-03-01-hybrid -Force
+ Loading Profile 2019-03-01-hybrid
+ PS C:\windows\system32> Get-Module -Name "Azure*" -ListAvailable
+
+ Directory: C:\Program Files\WindowsPowerShell\Modules
+
+ ModuleType Version Name ExportedCommands
+ - - - -
+ Script 4.5.0 Azure.Storage {Get-AzureStorageTable, New-AzureStorageTableSASToken, New...
+ Script 2.5.0 AzureRM
+ Script 0.5.0 AzureRM.BootStrapper {Update-AzureRmProfile, Uninstall-AzureRmProfile, Install-...
+ Script 4.6.1 AzureRM.Compute {Remove-AzureRmAvailabilitySet, Get-AzureRmAvailabilitySet...
+ Script 3.5.1 AzureRM.Dns {Get-AzureRmDnsRecordSet, New-AzureRmDnsRecordConfig, Remo...
+ Script 5.1.5 AzureRM.Insights {Get-AzureRmMetricDefinition, Get-AzureRmMetric, Remove-Az...
+ Script 4.2.0 AzureRM.KeyVault {Add-AzureKeyVaultCertificate, Set-AzureKeyVaultCertificat...
+ Script 5.0.1 AzureRM.Network {Add-AzureRmApplicationGatewayAuthenticationCertificate, G...
+ Script 5.8.3 AzureRM.profile {Disable-AzureRmDataCollection, Disable-AzureRmContextAuto...
+ Script 6.4.3 AzureRM.Resources {Get-AzureRmProviderOperation, Remove-AzureRmRoleAssignmen...
+ Script 5.0.4 AzureRM.Storage {Get-AzureRmStorageAccount, Get-AzureRmStorageAccountKey, ...
+ Script 4.0.2 AzureRM.Tags {Remove-AzureRmTag, Get-AzureRmTag, New-AzureRmTag}
+ Script 4.0.3 AzureRM.UsageAggregates Get-UsageAggregates
+ Script 5.0.1 AzureRM.Websites {Get-AzureRmAppServicePlan, Set-AzureRmAppServicePlan, New...
+
+
+ Directory: C:\Program Files (x86)\Microsoft Azure Information Protection\Powershell
+
+ ModuleType Version Name ExportedCommands
+ - - - -
+ Binary 1.48.204.0 AzureInformationProtection {Clear-RMSAuthentication, Get-RMSFileStatus, Get-RMSServer...
+ ```
+
## Step 5: Modify host file for endpoint name resolution
-You will now add the Azure consistent VIP that you defined on the local web UI of device to:
+You will now add the device IP address to:
- The host file on the client, OR,
- The DNS server configuration
On your Windows client that you are using to connect to the device, take the fol
    ```

    > [!IMPORTANT]
- > The entry in the hosts file should match exactly that provided to connect to Azure Resource Manager at a later step. Make sure that the DNS Domain entry here is all in the lowercase.
+    > The entry in the hosts file should exactly match the entry you provide to connect to Azure Resource Manager in a later step. Make sure that the DNS domain entry is all in lowercase. To get the values for the `<appliance name>` and `<DNS domain>`, go to the **Device** page in the local UI of your device.
You saved the device IP from the local web UI in an earlier step.
- The login.\<appliance name\>.\<DNS domain\> entry is the endpoint for Security Token Service (STS). STS is responsible for creation, validation, renewal, and cancellation of security tokens. The security token service is used to create the access token and refresh token that are used for continuous communication between the device and the client.
+ The `login.<appliance name>.<DNS domain>` entry is the endpoint for Security Token Service (STS). STS is responsible for creation, validation, renewal, and cancellation of security tokens. The security token service is used to create the access token and refresh token that are used for continuous communication between the device and the client.
+
+ The endpoint for blob storage is optional when connecting to Azure Resource Manager. This endpoint is needed when transferring data to Azure via storage accounts.
3. For reference, use the following image. Save the **hosts** file.
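As an illustration, assuming a device IP of `10.100.10.10` and the appliance name and DNS domain used in the sample outputs later in this article (`myasegpu` and `wdshcsso.com`), the hosts file entries would look like the following. The storage account name `mystorageacct` is a hypothetical example; that entry is only needed if you use a local storage account:

```
10.100.10.10 login.myasegpu.wdshcsso.com
10.100.10.10 management.myasegpu.wdshcsso.com
10.100.10.10 mystorageacct.blob.myasegpu.wdshcsso.com
```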
On your Windows client that you are using to connect to the device, take the fol
## Step 6: Verify endpoint name resolution on the client
-Check if the endpoint name is resolved on the client that you are using to connect to the Azure consistent VIP.
+Check if the endpoint name is resolved on the client that you are using to connect to the device.
-1. You can use the ping.exe command-line utility to check that the endpoint name is resolved. Given an IP address, the ping command will return the TCP/IP host name of the computer you\'re tracing.
+1. You can use the `ping.exe` command-line utility to check that the endpoint name is resolved. Given an IP address, the `ping` command will return the TCP/IP host name of the computer you're tracing.
Add the `-a` switch to the command line as shown in the example below. If the host name is returnable, it will also return this potentially valuable information in the reply.
Check if the endpoint name is resolved on the client that you are using to conne
## Step 7: Set Azure Resource Manager environment
+### [Az](#tab/Az)
+
+Set the Azure Resource Manager environment and verify that device-to-client communication via Azure Resource Manager is working. Take the following steps for this verification:
++
+1. Use the `Add-AzEnvironment` cmdlet to further ensure that the communication via Azure Resource Manager is working properly and the API calls are going through the port dedicated for Azure Resource Manager - 443.
+
+ The `Add-AzEnvironment` cmdlet adds endpoints and metadata to enable Azure Resource Manager cmdlets to connect with a new instance of Azure Resource Manager.
++
+ > [!IMPORTANT]
+ > The Azure Resource Manager endpoint URL that you provide in the following cmdlet is case-sensitive. Make sure the endpoint URL is all in lowercase and matches what you provided in the hosts file. If the case doesn't match, then you will see an error.
+
+ ```powershell
+ Add-AzEnvironment -Name <Environment Name> -ARMEndpoint "https://management.<appliance name>.<DNSDomain>/"
+ ```
+
+ A sample output is shown below:
+
+ ```output
+ PS C:\WINDOWS\system32> Add-AzEnvironment -Name AzASE -ARMEndpoint "https://management.myasegpu.wdshcsso.com/"
+
+    Name  Resource Manager Url                      ActiveDirectory Authority
+    ----  --------------------                      -------------------------
+    AzASE https://management.myasegpu.wdshcsso.com/ https://login.myasegpu.wdshcsso.c...
+ ```
+
+2. Set the environment as Azure Stack Edge and the port to be used for Azure Resource Manager calls as 443. You define the environment in two ways:
+
+ - Set the environment. Type the following command:
+
+ ```powershell
+ Set-AzEnvironment -Name <Environment Name>
+ ```
+
+ Here is an example output.
+
+ ```output
+ PS C:\WINDOWS\system32> Set-AzEnvironment -Name AzASE
+
+    Name  Resource Manager Url                      ActiveDirectory Authority
+    ----  --------------------                      -------------------------
+    AzASE https://management.myasegpu.wdshcsso.com/ https://login.myasegpu.wdshcsso.c...
+ ```
+    For more information, go to [Set-AzEnvironment](/powershell/module/az.accounts/set-azenvironment).
+
+    - Define the environment inline for every cmdlet that you execute. This ensures that all the API calls go through the correct environment. By default, the calls go through the Azure public cloud, but you want them to go through the environment that you set for the Azure Stack Edge device.
+
+ - See more information on how to [Switch Az environments](#switch-environments).
+
+3. Call local device APIs to authenticate the connections to Azure Resource Manager.
+
+ 1. These credentials are for a local machine account and are solely used for API access.
+
+ 2. You can connect via `login-AzAccount` or via `Connect-AzAccount` command.
+
+ 1. To sign in, type the following command.
+
+ ```powershell
+ $pass = ConvertTo-SecureString "<Your password>" -AsPlainText -Force;
+ $cred = New-Object System.Management.Automation.PSCredential("EdgeArmUser", $pass)
+ Connect-AzAccount -EnvironmentName AzASE -TenantId c0257de7-538f-415c-993a-1b87a031879d -credential $cred
+ ```
+
+    Use the tenant ID `c0257de7-538f-415c-993a-1b87a031879d`; in this instance, it is hard-coded.
+ Use the following username and password.
+
+ - **Username** - *EdgeArmUser*
+
+ - **Password** - [Set the password for Azure Resource Manager](azure-stack-edge-gpu-set-azure-resource-manager-password.md) and use this password to sign in.
+
++
+ Here is an example output for the `Connect-AzAccount`:
+
+ ```output
+ PS C:\windows\system32> $pass = ConvertTo-SecureString "<Your password>" -AsPlainText -Force;
+ PS C:\windows\system32> $cred = New-Object System.Management.Automation.PSCredential("EdgeArmUser", $pass)
+ PS C:\windows\system32> Connect-AzAccount -EnvironmentName AzASE -TenantId c0257de7-538f-415c-993a-1b87a031879d -credential $cred
+
+    Account               SubscriptionName              TenantId                             Environment
+    -------               ----------------              --------                             -----------
+    EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzASE
+
+ PS C:\windows\system32>
+ ```
+
+ An alternative way to log in is to use the `login-AzAccount` cmdlet.
+
+ `login-AzAccount -EnvironmentName <Environment Name> -TenantId c0257de7-538f-415c-993a-1b87a031879d`
+
+ Here is an example output.
+
+ ```output
+ PS C:\WINDOWS\system32> login-AzAccount -EnvironmentName AzASE -TenantId c0257de7-538f-415c-993a-1b87a031879d
+
+    Account               SubscriptionName              TenantId
+    -------               ----------------              --------
+    EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a...
+
+ PS C:\WINDOWS\system32>
+ ```
+4. To verify that the connection to the device is working, use the `Get-AzResource` command. This command should return all the resources that exist locally on the device.
+
+ Here is an example output.
+
+ ```output
+ PS C:\WINDOWS\system32> Get-AzResource
+
+ Name : aseimagestorageaccount
+ ResourceGroupName : ase-image-resourcegroup
+ ResourceType : Microsoft.Storage/storageaccounts
+ Location : dbelocal
+ ResourceId : /subscriptions/.../resourceGroups/ase-image-resourcegroup/providers/Microsoft.Storage/storageac
+ counts/aseimagestorageaccount
+ Tags :
+
+ Name : myaselinuxvmimage1
+ ResourceGroupName : ASERG
+ ResourceType : Microsoft.Compute/images
+ Location : dbelocal
+ ResourceId : /subscriptions/.../resourceGroups/ASERG/providers/Microsoft.Compute/images/myaselinuxvmimage1
+ Tags :
+
+ Name : ASEVNET
+ ResourceGroupName : ASERG
+ ResourceType : Microsoft.Network/virtualNetworks
+ Location : dbelocal
+ ResourceId : /subscriptions/.../resourceGroups/ASERG/providers/Microsoft.Network/virtualNetworks/ASEVNET
+ Tags :
+
+ PS C:\WINDOWS\system32>
+ ```
+
+
+
+### [AzureRM](#tab/AzureRM)
+ Set the Azure Resource Manager environment and verify that your device to client communication via Azure Resource Manager is working fine. Take the following steps for this verification:
Set the Azure Resource Manager environment and verify that your device to client
A sample output is shown below:
- ```powershell
+ ```output
    PS C:\windows\system32> Add-AzureRmEnvironment -Name AzDBE -ARMEndpoint https://management.dbe-n6hugc2ra.microsoftdatabox.com/

    Name Resource Manager Url ActiveDirectory Authority
Set the Azure Resource Manager environment and verify that your device to client
    AzDBE https://management.dbe-n6hugc2ra.microsoftdatabox.com https://login.dbe-n6hugc2ra.microsoftdatabox.com/adfs/
    ```
-2. Set the environment as Azure Stack Edge Pro and the port to be used for Azure Resource Manager calls as 443. You define the environment in two ways:
+2. Set the environment as Azure Stack Edge and the port to be used for Azure Resource Manager calls as 443. You define the environment in two ways:
- Set the environment. Type the following command:
Set the Azure Resource Manager environment and verify that your device to client
For more information, go to [Set-AzureRMEnvironment](/powershell/module/azurerm.profile/set-azurermenvironment?view=azurermps-6.13.0&preserve-view=true).
- - Define the environment inline for every cmdlet that you execute. This ensures that all the API calls are going through the correct environment. By default, the calls would go through the Azure public but you want these to go through the environment that you set for Azure Stack Edge Pro device.
+ - Define the environment inline for every cmdlet that you execute. This ensures that all the API calls go through the correct environment. By default, the calls go through the Azure public cloud, but you want them to go through the environment that you set for the Azure Stack Edge device.
- See more information on [how to switch AzureRM environments](#switch-environments).
Set the Azure Resource Manager environment and verify that your device to client
- **Password** - [Set the password for Azure Resource Manager](azure-stack-edge-gpu-set-azure-resource-manager-password.md) and use this password to sign in.
- ```powershell
+ ```output
    PS C:\windows\system32> $pass = ConvertTo-SecureString "<Your password>" -AsPlainText -Force;
    PS C:\windows\system32> $cred = New-Object System.Management.Automation.PSCredential("EdgeArmUser", $pass)
    PS C:\windows\system32> Connect-AzureRmAccount -EnvironmentName AzDBE -TenantId c0257de7-538f-415c-993a-1b87a031879d -credential $cred
Set the Azure Resource Manager environment and verify that your device to client
Here is a sample output of the command.
- ```powershell
+ ```output
    PS C:\Users\Administrator> login-AzureRMAccount -EnvironmentName AzDBE -TenantId c0257de7-538f-415c-993a-1b87a031879d

    Account               SubscriptionName              TenantId                             Environment
Set the Azure Resource Manager environment and verify that your device to client
    EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE
    PS C:\Users\Administrator>
    ```

If you run into issues with your Azure Resource Manager connections, see [Troubleshoot Azure Resource Manager issues](azure-stack-edge-gpu-troubleshoot-azure-resource-manager.md) for guidance.

> [!IMPORTANT]
-> The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge Pro device restarts. If this happens, any cmdlets that you execute, will return error messages to the effect that you are not connected to Azure anymore. You will need to sign in again.
+> The connection to Azure Resource Manager expires every 1.5 hours or if your Azure Stack Edge device restarts. If this happens, any cmdlets that you execute will return error messages to the effect that you are not connected to Azure anymore. You will need to sign in again.
## Switch environments
-Run `Disconnect-AzureRmAccount` command to switch to a different `AzureRmEnvironment`.
+You may need to switch between two environments.
+
+### [Az](#tab/Az)
-If you use `Set-AzureRmEnvironment` and `Login-AzureRmAccount` without using `Disconnect-AzureRmAccount`, the environment is not actually switched.
+Run `Disconnect-AzAccount` command to switch to a different `AzEnvironment`. If you use `Set-AzEnvironment` and `Login-AzAccount` without using `Disconnect-AzAccount`, the environment is not actually switched.
+
+The following examples show how to switch between two environments, `AzDBE1` and `AzDBE2`.
+
+First, list all the existing environments on your client.
++
+```output
+PS C:\WINDOWS\system32> Get-AzEnvironment
+Name              Resource Manager Url                                 ActiveDirectory Authority
+----              --------------------                                 -------------------------
+AzureChinaCloud   https://management.chinacloudapi.cn/                 https://login.chinacloudapi.cn/
+AzureCloud        https://management.azure.com/                        https://login.microsoftonline.com/
+AzureGermanCloud  https://management.microsoftazure.de/                https://login.microsoftonline.de/
+AzDBE1            https://management.HVTG1T2-Test.microsoftdatabox.com https://login.hvtg1t2-test.microsoftdatabox.com/adfs/
+AzureUSGovernment https://management.usgovcloudapi.net/                https://login.microsoftonline.us/
+AzDBE2            https://management.CVV4PX2-Test.microsoftdatabox.com https://login.cvv4px2-test.microsoftdatabox.com/adfs/
+```
+
+Next, get which environment you are currently connected to via your Azure Resource Manager.
+
+```output
+PS C:\WINDOWS\system32> Get-AzContext |fl *
+
+Name               : Default Provider Subscription (...) - EdgeArmUser@localhost
+Account            : EdgeArmUser@localhost
+Environment        : AzDBE2
+Subscription       : ...
+Tenant             : c0257de7-538f-415c-993a-1b87a031879d
+TokenCache         : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCache
+VersionProfile     :
+ExtendedProperties : {}
+```
+
+You should now disconnect from the current environment before you switch to the other environment.
+
+```output
+PS C:\WINDOWS\system32> Disconnect-AzAccount
+
+Id                    : EdgeArmUser@localhost
+Type                  : User
+Tenants               : {c0257de7-538f-415c-993a-1b87a031879d}
+AccessToken           :
+Credential            :
+TenantMap             : {}
+CertificateThumbprint :
+ExtendedProperties : {[Subscriptions, ...], [Tenants, c0257de7-538f-415c-993a-1b87a031879d]}
+```
+
+Log into the other environment. The sample output is shown below.
+
+```output
+PS C:\WINDOWS\system32> Login-AzAccount -Environment "AzDBE1" -TenantId $ArmTenantId
+
+Account               SubscriptionName              TenantId                             Environment
+-------               ----------------              --------                             -----------
+EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d AzDBE1
+```
+
+Run this cmdlet to confirm which environment you are connected to.
+
+```output
+PS C:\WINDOWS\system32> Get-AzContext |fl *
+
+Name               : Default Provider Subscription (...) - EdgeArmUser@localhost
+Account            : EdgeArmUser@localhost
+Environment        : AzDBE1
+Subscription       : ...
+Tenant             : c0257de7-538f-415c-993a-1b87a031879d
+TokenCache         : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCache
+VersionProfile     :
+ExtendedProperties : {}
+```
+You have now switched to the intended environment.
+
+### [AzureRM](#tab/AzureRM)
+
+Run `Disconnect-AzureRmAccount` command to switch to a different `AzureRmEnvironment`. If you use `Set-AzureRmEnvironment` and `Login-AzureRmAccount` without using `Disconnect-AzureRmAccount`, the environment is not actually switched.
The following examples show how to switch between two environments, `AzDBE1` and `AzDBE2`. First, list all the existing environments on your client.
-```azurepowershell
+```output
PS C:\WINDOWS\system32> Get-AzureRmEnvironment
Name              Resource Manager Url                                 ActiveDirectory Authority
----              --------------------                                 -------------------------
AzDBE2 https://management.CVV4PX2-Test.microsoftdatabox.com https://l
Next, get which environment you are currently connected to via your Azure Resource Manager.
-```azurepowershell
+```output
PS C:\WINDOWS\system32> Get-AzureRmContext |fl *

Name               : Default Provider Subscription (A4257FDE-B946-4E01-ADE7-674760B8D1A3) - EdgeArmUser@localhost
Account            : EdgeArmUser@localhost
Environment        : AzDBE2
-Subscription : a4257fde-b946-4e01-ade7-674760b8d1a3ΓÇï
+Subscription       : ...
Tenant             : c0257de7-538f-415c-993a-1b87a031879d
TokenCache         : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCache
VersionProfile     :
ExtendedProperties : {}

You should now disconnect from the current environment before you switch to the other environment.
-```azurepowershell
+```output
PS C:\WINDOWS\system32> Disconnect-AzureRmAccount

Id                    : EdgeArmUser@localhost
AccessToken           :
Credential            :
TenantMap             : {}
CertificateThumbprint :
-ExtendedProperties : {[Subscriptions, A4257FDE-B946-4E01-ADE7-674760B8D1A3], [Tenants, c0257de7-538f-415c-993a-1b87a031879d]}
+ExtendedProperties : {[Subscriptions, ...], [Tenants, c0257de7-538f-415c-993a-1b87a031879d]}
``` Log into the other environment. The sample output is shown below.
-```azurepowershell
+```output
PS C:\WINDOWS\system32> Login-AzureRmAccount -Environment "AzDBE1" -TenantId $ArmTenantId

Account               SubscriptionName              TenantId                             Environment
EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87
Run this cmdlet to confirm which environment you are connected to.
-```azurepowershell
+```output
PS C:\WINDOWS\system32> Get-AzureRmContext |fl *

-Name : Default Provider Subscription (A4257FDE-B946-4E01-ADE7-674760B8D1A3) - EdgeArmUser@localhostΓÇï
+Name               : Default Provider Subscription (...) - EdgeArmUser@localhost
Account            : EdgeArmUser@localhost
Environment        : AzDBE1
-Subscription : a4257fde-b946-4e01-ade7-674760b8d1a3ΓÇï
+Subscription : ...ΓÇï
Tenant : c0257de7-538f-415c-993a-1b87a031879dΓÇï TokenCache : Microsoft.Azure.Commands.Common.Authentication.ProtectedFileTokenCacheΓÇï VersionProfile :ΓÇï ExtendedProperties : {} ```
-ΓÇïYou have now switched to the intended environment.
+ΓÇï
+
+You have now switched to the intended environment.
## Next steps
databox-online Azure Stack Edge Gpu Create Virtual Switch Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-switch-powershell.md
Previously updated : 04/06/2021 Last updated : 06/25/2021
Before you begin, make sure that:
``` Here is an example output:
- ```powershell
+ ```output
[10.100.10.10]: PS>Get-NetAdapter -Physical Name InterfaceDescription ifIndex Status MacAddress LinkSpeed
Before you begin, make sure that:
2. Choose a network interface that is: - In the **Up** status.
- - Not used by any existing virtual switches. Currently, only one vswitch can be configured per network interface.
+ - Not used by any existing virtual switches. Currently, only one virtual switch can be configured per network interface.
To check the existing virtual switch and network interface association, run the `Get-HcsExternalVirtualSwitch` command. Here is an example output.
- ```powershell
+ ```output
[10.100.10.10]: PS>Get-HcsExternalVirtualSwitch Name : vSwitch1
Use the `Get-HcsExternalVirtualSwitch` command to identify the newly created swi
Here is an example output:
-```powershell
-[10.100.10.10]: P> Add-HcsExternalVirtualSwitch -InterfaceAlias Port5 -WaitForSwitchCreation $true
+```output
+[10.100.10.10]: PS> Add-HcsExternalVirtualSwitch -InterfaceAlias Port5 -WaitForSwitchCreation $true
[10.100.10.10]: PS>Get-HcsExternalVirtualSwitch Name : vSwitch1
Type : External
[10.100.10.10]: PS> ```
-## Verify network, subnet
+## Verify network, subnet for switch
After you have created the new virtual switch, Azure Stack Edge Pro GPU automatically creates a virtual network and subnet that corresponds to it. You can use this virtual network when creating VMs.
-<!--To identify the virtual network and subnet associated with the new switch that you created, use the `Get-HcsVirtualNetwork` command. This cmdlet will be released in April some time. -->
+To identify the virtual network and the subnet associated with the new switch that you created, use the `Get-HcsVirtualNetwork` cmdlet.
+
+## Create virtual LANs
+
+To add a virtual local area network (LAN) configuration on a virtual switch, use the following cmdlet.
+
+```powershell
+Add-HcsVirtualNetwork -VirtualSwitchName <Virtual Switch name> -VnetName <Virtual Network Name> -VlanId <Vlan Id> -AddressSpace <Address Space> -GatewayIPAddress <Gateway IP> -DnsServers <Dns Servers List> -DnsSuffix <Dns Suffix name>
+```
+
+The following parameters can be used with the `Add-HcsVirtualNetwork` cmdlet.
++
+|Parameters |Description |
+|||
+|VNetName |Name for the virtual LAN network |
+|VirtualSwitchName |Virtual switch name where you want to add virtual LAN config |
+|AddressSpace |Subnet address space for the virtual LAN network |
+|GatewayIPAddress |Gateway for the virtual network |
+|DnsServers |List of DNS server IP addresses |
+|DnsSuffix |DNS name without the host part for the virtual LAN network subnet |
+++
+Here is an example output.
+
+```output
+[10.100.10.10]: PS> Add-HcsVirtualNetwork -VirtualSwitchName vSwitch1 -VnetName vlanNetwork100 -VlanId 100 -AddressSpace 5.5.0.0/16 -GatewayIPAddress 5.5.0.1 -DnsServers "5.5.50.50","5.5.50.100" -DnsSuffix "name.domain.com"
+
+[10.100.10.10]: PS> Get-HcsVirtualNetwork
+
+Name : vnet2015
+AddressSpace : 10.128.48.0/22
+SwitchName : vSwitch1
+GatewayIPAddress : 10.128.48.1
+DnsServers : {}
+DnsSuffix :
+VlanId : 2015
+
+Name : vnet3011
+AddressSpace : 10.126.64.0/22
+SwitchName : vSwitch1
+GatewayIPAddress : 10.126.64.1
+DnsServers : {}
+DnsSuffix :
+VlanId : 3011
+```
+
+> [!NOTE]
+> - You can configure multiple virtual LANs on the same virtual switch.
+> - The gateway IP address must be in the same subnet as the address space that you pass in.
+> - You can't remove a virtual switch if there are virtual LANs configured. To delete this virtual switch, you first need to delete the virtual LAN and then delete the virtual switch.
+
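The gateway-in-subnet rule in the note above can be checked before you run the cmdlet. Here's a minimal sketch in Python using the standard `ipaddress` module; the function name is illustrative, and the sample values match the `Add-HcsVirtualNetwork` example (address space `5.5.0.0/16`, gateway `5.5.0.1`):

```python
import ipaddress

def gateway_in_address_space(address_space: str, gateway_ip: str) -> bool:
    """Return True if the gateway IP falls inside the given address space."""
    network = ipaddress.ip_network(address_space, strict=True)
    gateway = ipaddress.ip_address(gateway_ip)
    return gateway in network

# Matches the example above: 5.5.0.1 is inside 5.5.0.0/16.
print(gateway_in_address_space("5.5.0.0/16", "5.5.0.1"))   # True
# An address outside the /16 would fail the cmdlet's constraint.
print(gateway_in_address_space("5.5.0.0/16", "5.6.0.1"))   # False
```

Running a check like this locally avoids a failed `Add-HcsVirtualNetwork` call on the device.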
+## Verify network, subnet for virtual LAN
+
+After you've created the virtual LAN, a virtual network and a corresponding subnet are automatically created. You can use this virtual network when creating VMs.
+
+To identify the virtual network and the subnet associated with the new switch that you created, use the `Get-HcsVirtualNetwork` cmdlet.
+ ## Next steps
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-cli-python.md
Previously updated : 05/19/2021 Last updated : 06/30/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
For a detailed explanation of the workflow diagram, see [Deploy VMs on your Azur
Before you begin creating and managing a VM on your Azure Stack Edge Pro device using Azure CLI and Python, you need to make sure you have completed the prerequisites listed in the following steps:
-1. You completed the network settings on your Azure Stack Edge Pro device as described in [Step 1: Configure Azure Stack Edge Pro device](azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-pro-device).
+1. You completed the network settings on your Azure Stack Edge Pro device as described in [Step 1: Configure Azure Stack Edge Pro device](azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-device).
2. You enabled a network interface for compute. This network interface IP is used to create a virtual switch for the VM deployment. The following steps walk you through the process:
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
Follow these steps to sign in as a *user*:
- You can either specify the username and password directly within the `az login` command, or authenticate by using a browser. You must do the latter if your account has multi-factor authentication enabled.
+ You can either specify the username and password directly within the `az login` command, or authenticate by using a browser. You must do the latter if your account has multifactor authentication enabled.
The following shows sample usage of `az login`:
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
$ENV:ARM_TENANT_ID = "c0257de7-538f-415c-993a-1b87a031879d" $ENV:ARM_CLIENT_ID = "cbd868c5-7207-431f-8d16-1cb144b50971" $ENV:ARM_CLIENT_SECRET = "<Your Azure Resource Manager password>"
- $ENV:ARM_SUBSCRIPTION_ID = "A4257FDE-B946-4E01-ADE7-674760B8D1A3"
+ $ENV:ARM_SUBSCRIPTION_ID = "<Your subscription ID>"
``` Your Azure Resource Manager Client ID is hard-coded. Your Azure Resource Manager Tenant ID and Azure Resource Manager Subscription ID are both present in the output of the `az login` command you ran earlier. The Azure Resource Manager Client secret is the Azure Resource Manager password that you set.
A Python script is provided to you to create a VM. Depending on whether you are
ubuntu13.vhd VM image resource id:
- /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/azure-sample-group-virtual-machines118/providers/Microsoft.Compute/images/UbuntuImage
+ /subscriptions/.../resourceGroups/azure-sample-group-virtual-machines118/providers/Microsoft.Compute/images/UbuntuImage
Create Vnet Create Subnet
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
The high-level summary of the deployment workflow is as follows:
Before you begin to create and manage VMs on your device via the Azure portal, make sure that:
-1. You've completed the network settings on your Azure Stack Edge Pro GPU device as described in [Step 1: Configure an Azure Stack Edge Pro GPU device](./azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-pro-device).
+1. You've completed the network settings on your Azure Stack Edge Pro GPU device as described in [Step 1: Configure an Azure Stack Edge Pro GPU device](./azure-stack-edge-gpu-connect-resource-manager.md#step-1-configure-azure-stack-edge-device).
1. You've enabled a network interface for compute. This network interface IP is used to create a virtual switch for the VM deployment. In the local UI of your device, go to **Compute**. Select the network interface that you'll use to create a virtual switch.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md
Previously updated : 05/13/2021- Last updated : 06/25/2021+ #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device. I want to use APIs so that I can efficiently manage my VMs.
This article describes how to create and manage a virtual machine (VM) on your A
## VM deployment workflow
-The deployment workflow is displayed in the following diagram:
+The high-level workflow of the VM deployment is as follows:
-![Diagram of the VM deployment workflow.](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell/vm-workflow-r.svg)
+1. Connect to the local Azure Resource Manager of your device.
+1. Identify the built-in subscription on the device.
+1. Bring your VM image.
+1. Create a resource group in the built-in subscription. The resource group will contain the VM and all the related resources.
+1. Create a local storage account on the device to store the VHD that will be used to create a VM image.
+1. Upload a Windows/Linux source image into the storage account to create a managed disk.
+1. Use the managed disk to create a VM image.
+1. Enable compute on a device port to create a virtual switch.
+1. Enabling compute creates a virtual network that uses the virtual switch attached to that port.
+1. Create a VM using the previously created VM image, virtual network, and virtual network interface(s) to communicate within the virtual network and assign a public IP address to remotely access the VM. Optionally include data disks to provide more storage for your VM.
## Prerequisites
The subscription contains all the resources that are required for VM creation.
The subscription is used to deploy the VMs.
+### [Az](#tab/az)
+ 1. To list the subscription, run the following command: ```powershell
- Get-AzureRmSubscription
+ Get-AzSubscription
+ ```
+
+ Here's some example output:
+
+ ```output
+ PS C:\WINDOWS\system32> Get-AzSubscription
+
+ Name Id TenantId
+ - -- --
+ Default Provider Subscription ... ...
+
+
+ PS C:\WINDOWS\system32>
```
+
+1. Get a list of the registered resource providers that are running on the device. The list ordinarily includes compute, network, and storage.
+
+ ```powershell
+ Get-AzResourceProvider
+ ```
+
+ > [!NOTE]
+ > The resource providers are pre-registered, and they can't be modified or changed.
Here's some example output:
+ ```output
+ PS C:\WINDOWS\system32> Get-AzResourceProvider
+
+ ProviderNamespace : Microsoft.AzureBridge
+ RegistrationState : Registered
+ ResourceTypes : {locations, operations, locations/ingestionJobs}
+ Locations : {DBELocal}
+
+ ProviderNamespace : Microsoft.Compute
+ RegistrationState : Registered
+ ResourceTypes : {virtualMachines, virtualMachines/extensions, locations, operations...}
+ Locations : {DBELocal}
+
+ ProviderNamespace : Microsoft.Network
+ RegistrationState : Registered
+ ResourceTypes : {operations, locations, locations/operations, locations/usages...}
+ Locations : {DBELocal}
+
+ ProviderNamespace : Microsoft.Resources
+ RegistrationState : Registered
+ ResourceTypes : {tenants, locations, providers, checkresourcename...}
+ Locations : {DBELocal}
+
+ ProviderNamespace : Microsoft.Storage
+ RegistrationState : Registered
+ ResourceTypes : {storageaccounts, storageAccounts/blobServices, storageAccounts/tableServices,
+ storageAccounts/queueServices...}
+ Locations : {DBELocal}
+
+ PS C:\WINDOWS\system32>
+ ```
+
+### [AzureRM](#tab/azure-rm)
+
+1. To list the subscription, run the following command:
+ ```powershell
+ Get-AzureRmSubscription
+ ```
+
+ Here's some example output:
+
+ ```output
PS C:\windows\system32> Get-AzureRmSubscription Name Id TenantId State - -- -- --
- Default Provider Subscription A4257FDE-B946-4E01-ADE7-674760B8D1A3 c0257de7-538f-415c-993a-1b87a031879d Enabled
+ Default Provider Subscription ... c0257de7-538f-415c-993a-1b87a031879d Enabled
PS C:\windows\system32> ```
The subscription is used to deploy the VMs.
Here's some example output:
- ```powershell
- Get-AzureRmResourceProvider
+ ```output
+ PS C:\Windows\system32> Get-AzureRmResourceProvider
ProviderNamespace : Microsoft.Compute RegistrationState : Registered ResourceTypes : {virtualMachines, virtualMachines/extensions, locations, operations...}
The subscription is used to deploy the VMs.
Locations : {DBELocal} ZoneMappings : ```
-
+
+
## Create a resource group
-Create an Azure resource group with [New-AzureRmResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources, such as a storage account, disk, and managed disk, are deployed and managed.
+Start by creating a new Azure resource group and use it as a logical container for all the VM-related resources, such as the storage account, disk, network interface, and managed disk.
> [!IMPORTANT] > All the resources are created in the same location as that of the device, and the location is set to **DBELocal**.
+### [Az](#tab/az)
+
+1. Set some parameters.
+
+ ```powershell
+ $ResourceGroupName = "<Resource group name>"
+ ```
+1. Create a resource group for the resources that you'll create for the VM.
+
+ ```powershell
+ New-AzResourceGroup -Name $ResourceGroupName -Location DBELocal
+ ```
+
+ Here's some example output:
+
+ ```output
+ PS C:\WINDOWS\system32> New-AzResourceGroup -Name myaseazrg -Location DBELocal
+
+ ResourceGroupName : myaseazrg
+ Location : dbelocal
+ ProvisioningState : Succeeded
+ Tags :
+ ResourceId : /subscriptions/.../resourceGroups/myaseazrg
+
+ PS C:\WINDOWS\system32>
+ ```
+
+### [AzureRM](#tab/azure-rm)
+ ```powershell New-AzureRmResourceGroup -Name <Resource group name> -Location DBELocal ``` Here's some example output:
-```powershell
+```output
New-AzureRmResourceGroup -Name rg191113014333 -Location DBELocal Successfully created Resource Group:rg191113014333 ```+ ## Create a storage account Create a new storage account by using the resource group that you created in the preceding step. This is a local storage account that you use to upload the virtual disk image for the VM.
+### [Az](#tab/az)
+
+1. Set some parameters.
+
+ ```powershell
+ $StorageAccountName = "<Storage account name>"
+ ```
+
+1. Create a new local storage account on your device.
+
+ ```powershell
+ New-AzStorageAccount -Name $StorageAccountName -ResourceGroupName $ResourceGroupName -Location DBELocal -SkuName Standard_LRS
+ ```
+
+ > [!NOTE]
+ > By using Azure Resource Manager, you can create only local storage accounts, such as locally redundant storage (standard or premium). To create tiered storage accounts, see [Tutorial: Transfer data via storage accounts with Azure Stack Edge Pro with GPU](azure-stack-edge-gpu-deploy-add-storage-accounts.md).
+
+ Here's an example output:
+
+ ```output
+ PS C:\WINDOWS\system32> New-AzStorageAccount -Name myaseazsa -ResourceGroupName myaseazrg -Location DBELocal -SkuName Standard_LRS
+
+ StorageAccountName ResourceGroupName PrimaryLocation SkuName Kind AccessTier CreationTime
+ -- - - -
+ myaseazsa myaseazrg DBELocal Standard_LRS Storage 6/10/2021 11:45...
+
+ PS C:\WINDOWS\system32>
+ ```
+
+1. Get the storage account key for the account that you created in the earlier step. When prompted, provide the resource group name and the storage account name.
+
+ ```powershell
+ Get-AzStorageAccountKey
+ ```
+
+ Here's an example output:
+
+ ```output
+ PS C:\WINDOWS\system32> Get-AzStorageAccountKey
+
+ cmdlet Get-AzStorageAccountKey at command pipeline position 1
+ Supply values for the following parameters:
+ (Type !? for Help.)
+ ResourceGroupName: myaseazrg
+ Name: myaseazsa
+
+ KeyName Value Permissions
+ - --
+ key1 gv3OF57tuPDyzBNc1M7fhil2UAiiwnhTT6zgiwE3TlF/CD217Cvw2YCPcrKF47joNKRvzp44leUe5HtVkGx8RQ== Full
+ key2 kmEynIs3xnpmSxWbU41h5a7DZD7v4gGV3yXa2NbPbmhrPt10+QmE5PkOxxypeSqbqzd9si+ArNvbsqIRuLH2Lw== Full
+
+ PS C:\WINDOWS\system32>
+ ```
+
+### [AzureRM](#tab/azure-rm)
+ ```powershell New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Resource group name> -Location DBELocal -SkuName Standard_LRS ```
New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Resou
Here's some example output:
-```powershell
+```output
New-AzureRmStorageAccount -Name sa191113014333 -ResourceGroupName rg191113014333 -SkuName Standard_LRS -Location DBELocal ResourceGroupName : rg191113014333 StorageAccountName : sa191113014333
-Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg191113014333/providers/Microsoft.Storage/storageaccounts/sa191113014333
+Id : /subscriptions/.../resourceGroups/rg191113014333/providers/Microsoft.Storage/storageaccounts/sa191113014333
Location : DBELocal Sku : Microsoft.Azure.Management.Storage.Models.Sku Kind : Storage
ExtendedProperties : {}
To get the storage account key, run the `Get-AzureRmStorageAccountKey` command. Here's some example output:
-```powershell
-PS C:\Users\Administrator> Get-AzureRmStorageAccountKey
+```output
+PS C:\windows\system32> Get-AzureRmStorageAccountKey
cmdlet Get-AzureRmStorageAccountKey at command pipeline position 1 Supply values for the following parameters:
KeyName Value
key1 /IjVJN+sSf7FMKiiPLlDm8mc9P4wtcmhhbnCa7... key2 gd34TcaDzDgsY9JtDNMUgLDOItUU0Qur3CBo6Q... ```+ ## Add the blob URI to the host file
-You already added the blob URI in the hosts file for the client that you're using to connect to Azure Blob Storage in "Step 5: Modify host file for endpoint name resolution" of [Deploy VMs on your Azure Stack Edge device via Azure PowerShell](./azure-stack-edge-gpu-connect-resource-manager.md#step-5-modify-host-file-for-endpoint-name-resolution). This entry was used to add the blob URI:
+You already added the blob URI in the hosts file for the client that you're using to connect to Azure Blob Storage in **Modify host file for endpoint name resolution** of [Connecting to Azure Resource Manager on your Azure Stack Edge device](./azure-stack-edge-gpu-connect-resource-manager.md#step-5-modify-host-file-for-endpoint-name-resolution). This entry was used to add the blob URI:
-\<Azure consistent network services VIP \> \<storage name\>.blob.\<appliance name\>.\<dnsdomain\>
+`<Device IP address>` `<storage name>.blob.<appliance name>.<dnsdomain>`
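To illustrate how that hosts-file entry is composed, here's a small Python sketch; the function name and the sample device IP, storage account, and appliance names are placeholders for illustration, not values required by this article:

```python
def blob_hosts_entry(device_ip: str, storage_name: str,
                     appliance_name: str, dns_domain: str) -> str:
    """Build the hosts-file line mapping the blob endpoint FQDN to the device IP."""
    fqdn = f"{storage_name}.blob.{appliance_name}.{dns_domain}"
    return f"{device_ip} {fqdn}"

print(blob_hosts_entry("10.100.10.10", "mystorageacct", "myasegpu1", "wdshcsso.com"))
# 10.100.10.10 mystorageacct.blob.myasegpu1.wdshcsso.com
```

The resulting line goes into the client's hosts file so that blob requests resolve to the device.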
## Install certificates
If you're using HTTPS, you need to install the appropriate certificates on your
Copy any disk images to be used into page blobs in the local storage account that you created earlier. You can use a tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md) to upload the virtual hard disk (VHD) to the storage account.
-<!--Before you use AzCopy, make sure that the [AzCopy is configured correctly](#configure-azcopy) for use with the blob storage REST API version that you're using with your Azure Stack Edge Pro device.
-```powershell
-AzCopy /Source:<sourceDirectoryForVHD> /Dest:<blobContainerUri> /DestKey:<storageAccountKey> /Y /S /V /NC:32 /BlobType:page /destType:blob
-```
+### [Az](#tab/az)
-> [!NOTE]
-> Set `BlobType` to `page` for creating a managed disk out of VHD. Set `BlobType` to `block` when you're writing to tiered storage accounts by using AzCopy.
+Use the following commands with AzCopy 10:
-You can download the disk images from Azure Marketplace. For more information, see [Get the virtual disk image from Azure Marketplace](azure-stack-edge-j-series-create-virtual-machine-image.md).
+1. Set some parameters, including the appropriate API version for AzCopy. In this example, AzCopy 10 is used.
-Here's some example output that uses AzCopy 7.3. For more information about this command, see [Upload VHD file to storage account by using AzCopy](../devtest-labs/devtest-lab-upload-vhd-using-azcopy.md).
+ ```powershell
+ $Env:AZCOPY_DEFAULT_SERVICE_API_VERSION="2019-07-07"
+ $ContainerName = <Container name>
+ $ResourceGroupName = <Resource group name>
+ $StorageAccountName = <Storage account name>
+ $VHDPath = "Full VHD Path"
+ $VHDFile = <VHD file name>
+ ```
+1. Copy the VHD from the source (in this case, local system) to the storage account that you created on your device in the earlier step.
+ ```powershell
+ $StorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName)[0].Value
+ $endPoint = (Get-AzStorageAccount -name $StorageAccountName -ResourceGroupName $ResourceGroupName).PrimaryEndpoints.Blob
+ $StorageAccountContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey -Endpoint $endpoint
+ $StorageAccountSAS = New-AzStorageAccountSASToken -Service Blob -ResourceType Container,Service,Object -Permission "acdlrw" -Context $StorageAccountContext -Protocol HttpsOnly
+ <Path to azcopy.exe> cp "$VHDPath\$VHDFile" "$endPoint$ContainerName$StorageAccountSAS"
+ ```
+
+ Here's an example output:
+
+ ```output
+ PS C:\windows\system32> $ContainerName = "testcontainer1"
+ PS C:\windows\system32> $ResourceGroupName = "myaseazrg"
+ PS C:\windows\system32> $StorageAccountName = "myaseazsa"
+ PS C:\windows\system32> $VHDPath = "C:\Users\alkohli\Downloads\Ubuntu1604"
+ PS C:\windows\system32> $VHDFile = "ubuntu13.vhd"
+
+ PS C:\windows\system32> $StorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName)[0].Value
+ PS C:\windows\system32> $endPoint = (Get-AzStorageAccount -name $StorageAccountName -ResourceGroupName $ResourceGroupName).PrimaryEndpoints.Blob
+ PS C:\windows\system32> $StorageAccountContext = New-AzStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey -Endpoint $endpoint
+ PS C:\windows\system32> $StorageAccountSAS = New-AzStorageAccountSASToken -Service Blob -ResourceType Container,Service,Object -Permission "acdlrw" -Context $StorageAccountContext -Protocol HttpsOnly
+
+ PS C:\windows\system32> C:\azcopy\azcopy_windows_amd64_10.10.0\azcopy.exe cp "$VHDPath\$VHDFile" "$endPoint$ContainerName$StorageAccountSAS"
+ INFO: Scanning...
+ INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
+
+ Job 72a5e3dd-9210-3e43-6691-6bebd4875760 has started
+ Log file is located at: C:\Users\alkohli\.azcopy\72a5e3dd-9210-3e43-6691-6bebd4875760.log
+
+ INFO: azcopy.exe: A newer version 10.11.0 is available to download
+ ```
+
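The destination URL passed to AzCopy above is simply the blob endpoint, container name, and SAS token concatenated, mirroring `"$endPoint$ContainerName$StorageAccountSAS"`. A hedged Python sketch of that composition (the function name and sample values are placeholders):

```python
def azcopy_destination(endpoint: str, container: str, sas_token: str) -> str:
    """Concatenate blob endpoint + container + SAS token into the AzCopy target URL."""
    # The endpoint from PrimaryEndpoints.Blob already ends with a trailing slash,
    # and the SAS token already starts with "?", so plain concatenation suffices.
    return f"{endpoint}{container}{sas_token}"

url = azcopy_destination("https://myaseazsa.blob.myasegpu.wdshcsso.com/",
                         "testcontainer1", "?sv=2019-07-07&sig=...")
print(url)
```

If the endpoint lacked the trailing slash or the SAS token its leading `?`, the copy would target the wrong URL, so verify both before passing the string to AzCopy.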
+### [AzureRM](#tab/azure-rm)
-```powershell
-AzCopy /Source:\\hcsfs\scratch\vm_vhds\linux\ /Dest:http://sa191113014333.blob.dbe-1dcmhq2.microsoftdatabox.com/vmimages /DestKey:gJKoyX2Amg0Zytd1ogA1kQ2xqudMHn7ljcDtkJRHwMZbMK== /Y /S /V /NC:32 /BlobType:page /destType:blob /z:2e7d7d27-c983-410c-b4aa-b0aa668af0c6
-```-->
Use the following commands with AzCopy 10: ```powershell
$StorageAccountSAS = New-AzureStorageAccountSASToken -Service Blob,File,Queue,Ta
Here's some example output:
-```powershell
-$ContainerName = <ContainerName>
-$ResourceGroupName = <ResourceGroupName>
-$StorageAccountName = <StorageAccountName>
-$VHDPath = "Full VHD Path"
-$VHDFile = <VHDFileName>
+```output
+$ContainerName = <Container name>
+$ResourceGroupName = <Resource group name>
+$StorageAccountName = <Storage account name>
+$VHDPath = "Full VHD path"
+$VHDFile = <VHD file name>
$StorageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName)[0].Value
$StorageAccountSAS = New-AzureStorageAccountSASToken -Service Blob,File,Queue,Ta
C:\AzCopy.exe cp "$VHDPath\$VHDFile" "$endPoint$ContainerName$StorageAccountSAS" ```+ ## Create a managed disk from the VHD
-To create a managed disk from the uploaded VHD, run the following command:
+You will now create a managed disk from the uploaded VHD.
+
+### [Az](#tab/az)
+
+1. Set some parameters.
+
+ ```powershell
+ $DiskName = "<Managed disk name>"
+ ```
+
+1. Create a managed disk from the uploaded VHD. To get the source URL for your VHD, go to the container in the storage account that contains the VHD in Storage Explorer. Right-click the VHD, and then select **Properties**. In the **Blob properties** dialog, select the **URI**.
+
+ ```powershell
+ $StorageAccountId = (Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName).Id
+ $DiskConfig = New-AzDiskConfig -Location DBELocal -StorageAccountId $StorageAccountId -CreateOption Import -SourceUri "Source URL for your VHD"
+ New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $DiskConfig
+ ```
+ Here's an example output:
+
+ ```output
+ PS C:\WINDOWS\system32> $DiskName = "myazmd"
+ PS C:\WINDOWS\system32> $StorageAccountId = (Get-AzStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName).Id
+ PS C:\WINDOWS\system32> $DiskConfig = New-AzDiskConfig -Location DBELocal -StorageAccountId $StorageAccountId -CreateOption Import -SourceUri "https://myaseazsa.blob.myasegpu.wdshcsso.com/testcontainer1/ubuntu13.vhd"
+ PS C:\WINDOWS\system32> New-AzDisk -ResourceGroupName $ResourceGroupName -DiskName $DiskName -Disk $DiskConfig
+
+ ResourceGroupName : myaseazrg
+ ManagedBy :
+ Sku : Microsoft.Azure.Management.Compute.Models.DiskSku
+ Zones :
+ TimeCreated : 6/24/2021 12:19:56 PM
+ OsType :
+ HyperVGeneration :
+ CreationData : Microsoft.Azure.Management.Compute.Models.CreationDat
+ a
+ DiskSizeGB : 30
+ DiskSizeBytes : 32212254720
+ UniqueId : 53743801-cbf2-4d2f-acb4-971d037a9395
+ EncryptionSettingsCollection :
+ ProvisioningState : Succeeded
+ DiskIOPSReadWrite : 500
+ DiskMBpsReadWrite : 60
+ DiskState : Unattached
+ Encryption : Microsoft.Azure.Management.Compute.Models.Encryption
+ Id : /subscriptions/.../r
+ esourceGroups/myaseazrg/providers/Microsoft.Compute/d
+ isks/myazmd
+ Name : myazmd
+ Type : Microsoft.Compute/disks
+ Location : DBELocal
+ Tags : {}
+
+ PS C:\WINDOWS\system32>
+ ```
+
+
+### [AzureRM](#tab/azure-rm)
```powershell $DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri "Source URL for your VHD" ``` Here's some example output:
-<code>
-$DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import ΓÇôSourceUri http://</code><code>sa191113014333.blob.dbe-1dcmhq2.microsoftdatabox.com/vmimages/ubuntu13.vhd</code>
+```output
+$DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri http://sa191113014333.blob.dbe-1dcmhq2.microsoftdatabox.com/vmimages/ubuntu13.vhd
+```
+
```powershell New-AzureRMDisk -ResourceGroupName <Resource group name> -DiskName <Disk name> -Disk $DiskConfig
New-AzureRMDisk -ResourceGroupName <Resource group name> -DiskName <Disk name> -
Here's some example output. For more information about this cmdlet, see [New-AzureRmDisk](/powershell/module/azurerm.compute/new-azurermdisk?view=azurermps-6.13.0&preserve-view=true).
-```powershell
-Tags :
-New-AzureRmDisk -ResourceGroupName rg191113014333 -DiskName ld191113014333 -Disk $DiskConfig
+```output
+Tags : New-AzureRmDisk -ResourceGroupName rg191113014333 -DiskName ld191113014333 -Disk $DiskConfig
ResourceGroupName : rg191113014333 ManagedBy :
CreationData : Microsoft.Azure.Management.Compute.Models.CreationData
DiskSizeGB : 30 EncryptionSettings : ProvisioningState : Succeeded
-Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg191113014333/providers/Micros
+Id : /subscriptions/.../resourceGroups/rg191113014333/providers/Micros
oft.Compute/disks/ld191113014333 Name : ld191113014333 Type : Microsoft.Compute/disks Location : DBELocal Tags : {} ```++
+## Create a VM image from the managed disk
+
+You'll now create a VM image from the managed disk.
-## Create a VM image from the image managed disk
+### [Az](#tab/az)
-To create a VM image from the managed disk, run the following command. Replace *\<Disk name>*, *\<OS type>*, and *\<Disk size>* with real values.
+1. Set some parameters.
+
+ ```powershell
+ $DiskSize = "<Size greater than or equal to size of source managed disk>"
+ $OsType = "<linux or windows>"
+ $ImageName = "<Image name>"
+ ```
+1. Create a VM image. The supported OS types are Linux and Windows.
+
+ ```powershell
+ $imageConfig = New-AzImageConfig -Location DBELocal
+ $ManagedDiskId = (Get-AzDisk -Name $DiskName -ResourceGroupName $ResourceGroupName).Id
+ Set-AzImageOsDisk -Image $imageConfig -OsType $OsType -OsState 'Generalized' -DiskSizeGB $DiskSize -ManagedDiskId $ManagedDiskId
+ New-AzImage -Image $imageConfig -ImageName $ImageName -ResourceGroupName $ResourceGroupName
+ ```
+ Here's an example output.
+
+ ```output
+ PS C:\WINDOWS\system32> $OsType = "linux"
+ PS C:\WINDOWS\system32> $ImageName = "myaseazlinuxvmimage"
+ PS C:\WINDOWS\system32> $DiskSize = 35
+ PS C:\WINDOWS\system32> $imageConfig = New-AzImageConfig -Location DBELocal
+ PS C:\WINDOWS\system32> $ManagedDiskId = (Get-AzDisk -Name $DiskName -ResourceGroupName $ResourceGroupName).Id
+ PS C:\WINDOWS\system32> Set-AzImageOsDisk -Image $imageConfig -OsType $OsType -OsState 'Generalized' -DiskSizeGB $DiskSize -ManagedDiskId $ManagedDiskId
+
+ ResourceGroupName :
+ SourceVirtualMachine :
+ StorageProfile : Microsoft.Azure.Management.Compute.Models.ImageStorageProfile
+ ProvisioningState :
+ HyperVGeneration : V1
+ Id :
+ Name :
+ Type :
+ Location : DBELocal
+ Tags :
+
+ PS C:\WINDOWS\system32> New-AzImage -Image $imageConfig -ImageName $ImageName -ResourceGroupName $ResourceGroupName
+
+ ResourceGroupName : myaseazrg
+ SourceVirtualMachine :
+ StorageProfile : Microsoft.Azure.Management.Compute.Models.ImageStorageProfile
+ ProvisioningState : Succeeded
+ HyperVGeneration : V1
+ Id : /subscriptions/.../resourceG
+ roups/myaseazrg/providers/Microsoft.Compute/images/myaseazlin
+ uxvmimage
+ Name : myaseazlinuxvmimage
+ Type : Microsoft.Compute/images
+ Location : dbelocal
+ Tags : {}
+
+ PS C:\WINDOWS\system32>
+ ```
+
+### [AzureRM](#tab/azure-rm)
+
+Run the following command. Replace *\<Disk name>*, *\<OS type>*, and *\<Disk size>* with real values.
```powershell $imageConfig = New-AzureRmImageConfig -Location DBELocal $ManagedDiskId = (Get-AzureRmDisk -Name <Disk name> -ResourceGroupName <Resource group name>).Id Set-AzureRmImageOsDisk -Image $imageConfig -OsType '<OS type>' -OsState 'Generalized' -DiskSizeGB <Disk size> -ManagedDiskId $ManagedDiskId -
-The supported OS types are Linux and Windows.
-
-For OS Type=Linux, for example:
-Set-AzureRmImageOsDisk -Image $imageConfig -OsType 'Linux' -OsState 'Generalized' -DiskSizeGB <Disk size> -ManagedDiskId $ManagedDiskId
New-AzureRmImage -Image $imageConfig -ImageName <Image name> -ResourceGroupName <Resource group name> ```
+The supported OS types are Linux and Windows.
+ Here's some example output. For more information about this cmdlet, see [New-AzureRmImage](/powershell/module/azurerm.compute/new-azurermimage?view=azurermps-6.13.0&preserve-view=true).
-```powershell
-New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig191113014333 -ResourceGroupName rg191113014333
-ResourceGroupName : rg191113014333
+```output
+PS C:\Windows\system32> New-AzureRmImage -Image $imageConfig -ImageName ig191113014333 -ResourceGroupName RG191113014333
+ResourceGroupName : RG191113014333
SourceVirtualMachine :
StorageProfile : Microsoft.Azure.Management.Compute.Models.ImageStorageProfile
ProvisioningState : Succeeded
-Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg191113014333/providers/Micr
- osoft.Compute/images/ig191113014333
+HyperVGeneration : V1
+Id : /subscriptions/.../resourceGroups/RG191113014333/providers/Microsoft.Compute/images/ig191113014333
Name : ig191113014333
Type : Microsoft.Compute/images
Location : dbelocal
Tags : {}
```

## Create your VM with previously created resources
Before you create and deploy the VM, you must create one virtual network and ass
> The following rules apply:
> - You can create only one virtual network, even across resource groups. The virtual network must have exactly the same address space as the logical network.
> - The virtual network can have only one subnet. The subnet must have exactly the same address space as the virtual network.
-> - When you create the virtual network interface card, you can use only the static allocation method. The user needs to provide a private IP address.
+> - When you create the virtual network interface card, you can use only the static allocation method. The user needs to provide a private IP address.
### Query the automatically created virtual network

When you enable compute from the local UI of your device, a virtual network called `ASEVNET` is created automatically, under the `ASERG` resource group.
+### [Az](#tab/az)
+ Use the following command to query the existing virtual network: ```powershell
-$aRmVN = Get-AzureRMVirtualNetwork -Name ASEVNET -ResourceGroupName ASERG
+$ArmVn = Get-AzVirtualNetwork -Name ASEVNET -ResourceGroupName ASERG
```
-<!--```powershell
-$subNetId=New-AzureRmVirtualNetworkSubnetConfig -Name <Subnet name> -AddressPrefix <Address Prefix>
-$aRmVN = New-AzureRmVirtualNetwork -ResourceGroupName <Resource group name> -Name <Vnet name> -Location DBELocal -AddressPrefix <Address prefix> -Subnet $subNetId
-```-->
+### [AzureRM](#tab/azure-rm)
+
+Use the following command to query the existing virtual network:
+
+```powershell
+$aRmVN = Get-AzureRMVirtualNetwork -Name ASEVNET -ResourceGroupName ASERG
+```
+ ### Create a virtual network interface card
-To create a virtual network interface card by using the virtual network subnet ID, run the following command:
+You'll create a virtual network interface card by using the virtual network subnet ID.
+
+### [Az](#tab/az)
+
+1. Set some parameters.
+
+ ```powershell
+ $IpConfigName = "<IP config name>"
+ $NicName = "<Network interface name>"
+ ```
+
+1. Create a virtual network interface.
+
+ ```powershell
+ $ipConfig = New-AzNetworkInterfaceIpConfig -Name $IpConfigName -SubnetId $aRmVN.Subnets[0].Id
+ $Nic = New-AzNetworkInterface -Name $NicName -ResourceGroupName $ResourceGroupName -Location DBELocal -IpConfiguration $IpConfig
+ ```
+
+    By default, an IP is dynamically assigned to your network interface from the network enabled for compute. Use the `-PrivateIpAddress` parameter if you are allocating a static IP to your network interface.
+
+ Here's an example output:
+
+ ```output
+ PS C:\WINDOWS\system32> $IpConfigName = "myazipconfig1"
+ PS C:\WINDOWS\system32> $NicName = "myaznic1"
+ PS C:\WINDOWS\system32> $ipConfig = New-AzNetworkInterfaceIpConfig -Name $IpConfigName -SubnetId $aRmVN.Subnets[0].Id
+ PS C:\WINDOWS\system32> $Nic = New-AzNetworkInterface -Name $NicName -ResourceGroupName $ResourceGroupName -Location DBELocal -IpConfiguration $IpConfig
+ PS C:\WINDOWS\system32> $Nic
+
+ Name : myaznic1
+ ResourceGroupName : myaseazrg
+ Location : dbelocal
+ Id : /subscriptions/.../re
+ sourceGroups/myaseazrg/providers/Microsoft.Network/net
+ workInterfaces/myaznic1
+ Etag : W/"0b20057b-2102-4f34-958b-656327c0fb1d"
+ ResourceGuid : e7d4131f-6f01-4492-9d4c-a8ff1af7244f
+ ProvisioningState : Succeeded
+ Tags :
+ VirtualMachine : null
+ IpConfigurations : [
+ {
+ "Name": "myazipconfig1",
+ "Etag":
+ "W/\"0b20057b-2102-4f34-958b-656327c0fb1d\"",
+ "Id": "/subscriptions/.../resourceGroups/myaseazrg/providers/Microsoft.
+ Network/networkInterfaces/myaznic1/ipConfigurations/my
+ azipconfig1",
+ "PrivateIpAddress": "10.126.76.60",
+ "PrivateIpAllocationMethod": "Dynamic",
+ "Subnet": {
+ "Delegations": [],
+ "Id": "/subscriptions/.../resourceGroups/ASERG/providers/Microsoft.Ne
+ twork/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet",
+ "ServiceAssociationLinks": []
+ },
+ "ProvisioningState": "Succeeded",
+ "PrivateIpAddressVersion": "IPv4",
+ "LoadBalancerBackendAddressPools": [],
+ "LoadBalancerInboundNatRules": [],
+ "Primary": true,
+ "ApplicationGatewayBackendAddressPools": [],
+ "ApplicationSecurityGroups": []
+ }
+ ]
+ DnsSettings : {
+ "DnsServers": [],
+ "AppliedDnsServers": [],
+ "InternalDomainNameSuffix": "auwlfcx0dhxurjgisct43fc
+ ywb.a--x.internal.cloudapp.net"
+ }
+ EnableIPForwarding : False
+ EnableAcceleratedNetworking : False
+ NetworkSecurityGroup : null
+ Primary :
+ MacAddress : 001DD84A58D1
+
+ PS C:\WINDOWS\system32>
+ ```
+
+Optionally, while you're creating a virtual network interface card for a VM, you can pass the public IP. In this instance, the public IP returns the private IP.
+
+```powershell
+New-AzPublicIPAddress -Name <Public IP> -ResourceGroupName <ResourceGroupName> -AllocationMethod Static -Location DBELocal
+$publicIP = (Get-AzPublicIPAddress -Name <Public IP> -ResourceGroupName <Resource group name>).Id
+$ipConfig = New-AzNetworkInterfaceIpConfig -Name <ConfigName> -PublicIpAddressId $publicIP -SubnetId $subNetId
+```
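If you want a static private IP instead of the default dynamic assignment, the same NIC creation can be sketched with the `-PrivateIpAddress` parameter. The address below is hypothetical; use a free IP from your compute-enabled network.

```powershell
# Hypothetical static address; pick a free IP from your compute-enabled network.
$ipConfig = New-AzNetworkInterfaceIpConfig -Name $IpConfigName -SubnetId $aRmVN.Subnets[0].Id -PrivateIpAddress "10.126.76.80"
$Nic = New-AzNetworkInterface -Name $NicName -ResourceGroupName $ResourceGroupName -Location DBELocal -IpConfiguration $ipConfig
```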
+
+### [AzureRM](#tab/azure-rm)
```powershell
$ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name <IP config Name> -SubnetId $aRmVN.Subnets[0].Id -PrivateIpAddress <Private IP>
$Nic = New-AzureRmNetworkInterface -Name <Nic name> -ResourceGroupName <Resource group name> -Location DBELocal -IpConfiguration $ipConfig
```
+By default, an IP is dynamically assigned to your network interface from the network enabled for compute. Use the `-PrivateIpAddress` parameter if you are allocating a static IP to your network interface.
+ Here's some example output:
-```powershell
-PS C:\Users\Administrator> $subNetId=New-AzureRmVirtualNetworkSubnetConfig -Name my-ase-subnet -AddressPrefix "5.5.0.0/16"
+```output
+PS C:\windows\system32> $subNetId=New-AzureRmVirtualNetworkSubnetConfig -Name my-ase-subnet -AddressPrefix "5.5.0.0/16"
-PS C:\Users\Administrator> $aRmVN = New-AzureRmVirtualNetwork -ResourceGroupName Resource-my-ase -Name my-ase-virtualnetwork -Location DBELocal -AddressPrefix "5.5.0.0/16" -Subnet $subNetId
+PS C:\windows\system32> $aRmVN = New-AzureRmVirtualNetwork -ResourceGroupName Resource-my-ase -Name my-ase-virtualnetwork -Location DBELocal -AddressPrefix "5.5.0.0/16" -Subnet $subNetId
WARNING: The output object type of this cmdlet will be modified in a future release.
-PS C:\Users\Administrator> $ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name my-ase-ip -SubnetId $aRmVN.Subnets[0].Id
-PS C:\Users\Administrator> $Nic = New-AzureRmNetworkInterface -Name my-ase-nic -ResourceGroupName Resource-my-ase -Location DBELocal -IpConfiguration $ipConfig
+PS C:\windows\system32> $ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name my-ase-ip -SubnetId $aRmVN.Subnets[0].Id
+PS C:\windows\system32> $Nic = New-AzureRmNetworkInterface -Name my-ase-nic -ResourceGroupName Resource-my-ase -Location DBELocal -IpConfiguration $ipConfig
WARNING: The output object type of this cmdlet will be modified in a future release.
-PS C:\Users\Administrator> $Nic
+PS C:\windows\system32> $Nic
-PS C:\Users\Administrator> (Get-AzureRmNetworkInterface)[0]
+PS C:\windows\system32> (Get-AzureRmNetworkInterface)[0]
Name : nic200108020444
ResourceGroupName : rg200108020444
Location : dbelocal
-Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg200108020444/providers/Microsoft.Network/networ
+Id : /subscriptions/.../resourceGroups/rg200108020444/providers/Microsoft.Network/networ
kInterfaces/nic200108020444
Etag : W/"f9d1759d-4d49-42fa-8826-e218e0b1d355"
ResourceGuid : 3851ae62-c13e-4416-9386-e21d9a2fef0f
ProvisioningState : Succeeded
Tags :
VirtualMachine : {
- "Id": "/subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg200108020444/providers/Microsoft.Compu
+ "Id": "/subscriptions/.../resourceGroups/rg200108020444/providers/Microsoft.Compu
te/virtualMachines/VM200108020444"
}
IpConfigurations : [
{
"Name": "ip200108020444",
"Etag": "W/\"f9d1759d-4d49-42fa-8826-e218e0b1d355\"",
- "Id": "/subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg200108020444/providers/Microsoft.Net
+ "Id": "/subscriptions/.../resourceGroups/rg200108020444/providers/Microsoft.Net
work/networkInterfaces/nic200108020444/ipConfigurations/ip200108020444",
"PrivateIpAddress": "5.5.166.65",
"PrivateIpAllocationMethod": "Static",
"Subnet": {
- "Id": "/subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/DbeSystemRG/providers/Microsoft.Netw
+ "Id": "/subscriptions/.../resourceGroups/DbeSystemRG/providers/Microsoft.Netw
ork/virtualNetworks/vSwitch1/subnets/subnet123",
"ResourceNavigationLinks": [],
"ServiceEndpoints": []
New-AzureRmPublicIPAddress -Name <Public IP> -ResourceGroupName <ResourceGroupNa
$publicIP = (Get-AzureRmPublicIPAddress -Name <Public IP> -ResourceGroupName <Resource group name>).Id
$ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name <ConfigName> -PublicIpAddressId $publicIP -SubnetId $subNetId
```

### Create a VM

You can now use the VM image to create a VM and attach it to the virtual network that you created earlier.
+### [Az](#tab/az)
+
+1. Set the username and password to sign in to the VM that you want to create.
+
+ ```powershell
+ $pass = ConvertTo-SecureString "<Password>" -AsPlainText -Force;
+ $cred = New-Object System.Management.Automation.PSCredential("<Enter username>", $pass)
+ ```
+ After you've created and powered up the VM, you'll use the preceding username and password to sign in to it.
+
+1. Set the parameters.
+
+ ```powershell
+ $VmName = "<VM name>"
+ $ComputerName = "<VM display name>"
+ $OsDiskName = "<OS disk name>"
+ ```
+1. Create the VM.
+
+ ```powershell
+ $VirtualMachine = New-AzVMConfig -VmName $VmName -VMSize "Standard_D1_v2"
+
+ $VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Linux -ComputerName $ComputerName -Credential $cred
+
+ $VirtualMachine = Set-AzVmOsDisk -VM $VirtualMachine -Name $OsDiskName -Caching "ReadWrite" -CreateOption "FromImage" -Linux -StorageAccountType Standard_LRS
+
+ $nicID = (Get-AzNetworkInterface -Name $NicName -ResourceGroupName $ResourceGroupName).Id
+
+ $VirtualMachine = Add-AzVMNetworkInterface -Vm $VirtualMachine -Id $nicID
+
+ $image = ( Get-AzImage -ResourceGroupName $ResourceGroupName -ImageName $ImageName).Id
+
+ $VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -Id $image
+
+ New-AzVM -ResourceGroupName $ResourceGroupName -Location DBELocal -VM $VirtualMachine -Verbose
+ ```
+
+ Here's an example output.
+
+    ```output
+ PS C:\WINDOWS\system32> $pass = ConvertTo-SecureString "Password1" -AsPlainText -Force;
+ PS C:\WINDOWS\system32> $cred = New-Object System.Management.Automation.PSCredential("myazuser", $pass)
+ PS C:\WINDOWS\system32> $VmName = "myazvm"
+ >> $ComputerName = "myazvmfriendlyname"
+ >> $OsDiskName = "myazosdisk1"
+ PS C:\WINDOWS\system32> $VirtualMachine = New-AzVMConfig -VmName $VmName -VMSize "Standard_D1_v2"
+ PS C:\WINDOWS\system32> $VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Linux -ComputerName $ComputerName -Credential $cred
+ PS C:\WINDOWS\system32> $VirtualMachine = Set-AzVmOsDisk -VM $VirtualMachine -Name $OsDiskName -Caching "ReadWrite" -CreateOption "FromImage" -Linux -StorageAccountType Standard_LRS
+ PS C:\WINDOWS\system32> $nicID = (Get-AzNetworkInterface -Name $NicName -ResourceGroupName $ResourceGroupName).Id
+    PS C:\WINDOWS\system32> $nicID
+    /subscriptions/.../resourceGroups/myaseazrg/providers/Microsoft.Network/networkInterfaces/myaznic1
+ PS C:\WINDOWS\system32> $VirtualMachine = Add-AzVMNetworkInterface -VM $VirtualMachine -Id $nicID
+ PS C:\WINDOWS\system32> $image = ( Get-AzImage -ResourceGroupName $ResourceGroupName -ImageName $ImageName).Id
+ PS C:\WINDOWS\system32> $VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -Id $image
+ PS C:\WINDOWS\system32> New-AzVM -ResourceGroupName $ResourceGroupName -Location DBELocal -VM $VirtualMachine -Verbose
+ WARNING: Since the VM is created using premium storage or managed disk, existing
+ standard storage account, myaseazsa, is used for boot diagnostics.
+ VERBOSE: Performing the operation "New" on target "myazvm".
+
+ RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+    --------- ------------------- ---------- ------------
+ True OK OK
+ ```
+1. To find the IP address assigned to the VM, query the virtual network interface that you created. Locate the `PrivateIPAddress` and copy the IP for your VM. Here's an example output.
+
+    ```output
+ PS C:\WINDOWS\system32> $Nic
+
+ Name : myaznic1
+ ResourceGroupName : myaseazrg
+ Location : dbelocal
+ Id : /subscriptions/.../re
+ sourceGroups/myaseazrg/providers/Microsoft.Network/net
+ workInterfaces/myaznic1
+ Etag : W/"0b20057b-2102-4f34-958b-656327c0fb1d"
+ ResourceGuid : e7d4131f-6f01-4492-9d4c-a8ff1af7244f
+ ProvisioningState : Succeeded
+ Tags :
+ VirtualMachine : null
+ IpConfigurations : [
+ {
+ "Name": "myazipconfig1",
+ "Etag":
+ "W/\"0b20057b-2102-4f34-958b-656327c0fb1d\"",
+ "Id": "/subscriptions/.../resourceGroups/myaseazrg/providers/Microsoft.
+ Network/networkInterfaces/myaznic1/ipConfigurations/my
+ azipconfig1",
+ "PrivateIpAddress": "10.126.76.60",
+ "PrivateIpAllocationMethod": "Dynamic",
+ "Subnet": {
+ "Delegations": [],
+ "Id": "/subscriptions/.../resourceGroups/ASERG/providers/Microsoft.Ne
+ twork/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet",
+ "ServiceAssociationLinks": []
+ },
+ "ProvisioningState": "Succeeded",
+ "PrivateIpAddressVersion": "IPv4",
+ "LoadBalancerBackendAddressPools": [],
+ "LoadBalancerInboundNatRules": [],
+ "Primary": true,
+ "ApplicationGatewayBackendAddressPools": [],
+ "ApplicationSecurityGroups": []
+ }
+ ]
+ DnsSettings : {
+ "DnsServers": [],
+ "AppliedDnsServers": [],
+ "InternalDomainNameSuffix": "auwlfcx0dhxurjgisct43fc
+ ywb.a--x.internal.cloudapp.net"
+ }
+ EnableIPForwarding : False
+ EnableAcceleratedNetworking : False
+ NetworkSecurityGroup : null
+ Primary :
+ MacAddress : 001DD84A58D1
+
+ PS C:\WINDOWS\system32>
+ ```
++
+### [AzureRM](#tab/azure-rm)
```powershell
$pass = ConvertTo-SecureString "<Password>" -AsPlainText -Force;
$cred = New-Object System.Management.Automation.PSCredential("<Enter username>", $pass)
$VirtualMachine = Set-AzureRmVMSourceImage -VM $VirtualMachine -Id $image
New-AzureRmVM -ResourceGroupName <Resource Group Name> -Location DBELocal -VM $VirtualMachine -Verbose
```

## Connect to the VM

Depending on whether you created a Windows VM or a Linux VM, the connection instructions can be different.
-### Connect to a Linux VM
+## Connect to a Linux VM
To connect to a Linux VM, do the following:

[!INCLUDE [azure-stack-edge-gateway-connect-vm](../../includes/azure-stack-edge-gateway-connect-virtual-machine-linux.md)]
- If you used a public IP address during the VM creation, you can use that IP to connect to the VM. To get the public IP, run the following command:
+If you used a public IP address during the VM creation, you can use that IP to connect to the VM. To get the public IP, run the following command:
- ```powershell
- $publicIp = Get-AzureRmPublicIpAddress -Name <Public IP> -ResourceGroupName <Resource group name>
- ```
- In this instance, the public IP is the same as the private IP that you passed during the creation of the virtual network interface.
+### [Az](#tab/az)
-### Connect to a Windows VM
+```powershell
+$publicIp = Get-AzPublicIpAddress -Name $PublicIp -ResourceGroupName $ResourceGroupName
+```
+In this instance, the public IP is the same as the private IP that you passed during the creation of the virtual network interface.
+
+### [AzureRM](#tab/azure-rm)
-To connect to a Windows VM, do the following:
+```powershell
+$publicIp = Get-AzureRmPublicIpAddress -Name <Public IP> -ResourceGroupName <Resource group name>
+```
+In this instance, the public IP is the same as the private IP that you passed during the creation of the virtual network interface.
+
+## Connect to a Windows VM
-<!--Connect to the VM by using the private IP that you passed during the VM creation.
+To connect to a Windows VM, do the following:
-Open an SSH session to connect with the IP address.
-`ssh -l <username> <ip address>`
-When you're prompted, provide the password that you used when creating the VM.
+## Manage the VM
-If you need to provide the SSH key, use this command:
+The following sections describe some of the common operations that you can perform on your Azure Stack Edge Pro device.
-ssh -i c:/users/Administrator/.ssh/id_rsa Administrator@5.5.41.236
+### List VMs that are running on the device
+
+To return a list of all the VMs that are running on your Azure Stack Edge device, run this command:
-If you used a public IP address during VM creation, you can use that IP to connect to the VM. To get the public IP:
+### [Az](#tab/az)
```powershell
-$publicIp = Get-AzureRmPublicIpAddress -Name <Public IP> -ResourceGroupName <Resource group name>
+Get-AzVM -ResourceGroupName <String> -Name <String>
```
-The public IP in this instance is the same as the private IP that you passed during the virtual network interface creation.-->
+For more information about this cmdlet, see [Get-AzVM](/powershell/module/az.compute/get-azvm?view=azps-6.1.0&preserve-view=true).
-## Manage the VM
-
-The following sections describe some of the common operations that you can create on your Azure Stack Edge Pro device.
-
-### List VMs that are running on the device
+### [AzureRM](#tab/azure-rm)
-To return a list of all the VMs that are running on your Azure Stack Edge device, run this command:
+```powershell
+Get-AzureRmVM -ResourceGroupName <String> -Name <String>
+```
+
-`Get-AzureRmVM -ResourceGroupName <String> -Name <String>`
### Turn on the VM

To turn on a virtual machine that's running on your device, run the following cmdlet:
-`Start-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>`
+### [Az](#tab/az)
+
+```powershell
+Start-AzVM [-Name] <String> [-ResourceGroupName] <String>
+```
+For more information about this cmdlet, see [Start-AzVM](/powershell/module/az.compute/start-azvm?view=azps-5.9.0&preserve-view=true).
+
+### [AzureRM](#tab/azure-rm)
+
+```powershell
+Start-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>
+```
For more information about this cmdlet, see [Start-AzureRmVM](/powershell/module/azurerm.compute/start-azurermvm?view=azurermps-6.13.0&preserve-view=true).

### Suspend or shut down the VM

To stop or shut down a virtual machine that's running on your device, run the following cmdlet:
+### [Az](#tab/az)
+
+```powershell
+Stop-AzVM [-Name] <String> [-StayProvisioned] [-ResourceGroupName] <String>
+```
+
+For more information about this cmdlet, see [Stop-AzVM cmdlet](/powershell/module/az.compute/stop-azvm?view=azps-5.9.0&preserve-view=true).
+
+### [AzureRM](#tab/azure-rm)
```powershell
Stop-AzureRmVM [-Name] <String> [-StayProvisioned] [-ResourceGroupName] <String>
```

For more information about this cmdlet, see [Stop-AzureRmVM cmdlet](/powershell/module/azurerm.compute/stop-azurermvm?view=azurermps-6.13.0&preserve-view=true).

### Add a data disk

If the workload requirements on your VM increase, you might need to add a data disk. To do so, run the following command:
+### [Az](#tab/az)
+
+```powershell
+Add-AzRmVMDataDisk -VM $VirtualMachine -Name "disk1" -VhdUri "https://contoso.blob.core.windows.net/vhds/diskstandard03.vhd" -LUN 0 -Caching ReadOnly -DiskSizeinGB 1 -CreateOption Empty
+
+Update-AzVM -ResourceGroupName "<Resource Group Name string>" -VM $VirtualMachine
+```
+
+### [AzureRM](#tab/azure-rm)
```powershell
Add-AzureRmVMDataDisk -VM $VirtualMachine -Name "disk1" -VhdUri "https://contoso.blob.core.windows.net/vhds/diskstandard03.vhd" -LUN 0 -Caching ReadOnly -DiskSizeinGB 1 -CreateOption Empty

Update-AzureRmVM -ResourceGroupName "<Resource Group Name string>" -VM $VirtualMachine
```

### Delete the VM

To remove a virtual machine from your device, run the following cmdlet:
+### [Az](#tab/az)
+ ```powershell
-Remove-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>
+Remove-AzVM [-Name] <String> [-ResourceGroupName] <String>
```
+For more information about this cmdlet, see [Remove-AzVm cmdlet](/powershell/module/az.compute/remove-azvm?view=azps-5.9.0&preserve-view=true).
+### [AzureRM](#tab/azure-rm)
+
+```powershell
+Remove-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>
+```
For more information about this cmdlet, see [Remove-AzureRmVm cmdlet](/powershell/module/azurerm.compute/remove-azurermvm?view=azurermps-6.13.0&preserve-view=true).

## Next steps

[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0&preserve-view=true)
databox-online Azure Stack Edge Gpu Local Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-local-resource-manager-overview.md
+
+ Title: What is local Azure Resource Manager on Azure Stack Edge Pro GPU device
description: Get an overview of the local Azure Resource Manager on your Azure Stack Edge device.
++++++ Last updated : 06/30/2021+
+#Customer intent: As an IT admin, I need to understand what is the local Azure Resource Manager on my Azure Stack Edge Pro device.
++
+# What is local Azure Resource Manager on Azure Stack Edge?
++
+Azure Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. The Azure Stack Edge devices support the same Azure Resource Manager APIs to create, update, and delete VMs in a local subscription. This support lets you manage the device in a manner consistent with the cloud.
+
+This article provides an overview of the local Azure Resource Manager that can be used to connect to the local APIs on your Azure Stack Edge devices.
+
+## About local Azure Resource Manager
+
+The local Azure Resource Manager provides a consistent management layer for all the calls made to the Azure Stack Edge device, and supports the use of Resource Manager templates. The benefits of the local Azure Resource Manager are discussed in the following sections.
+
+#### Consistent management layer
+
+The local Azure Resource Manager provides a consistent management layer to call the Azure Stack Edge device APIs and perform operations such as create, update, and delete VMs.
+
+1. When you send a request from REST APIs or SDKs, the local Azure Resource Manager on the device receives the request.
+1. The local Azure Resource Manager uses the Security Token Service (STS) to authenticate and authorize the request. STS is responsible for the creation, validation, renewal, and cancellation of security tokens. STS creates two types of security tokens: access tokens and refresh tokens. These tokens are used for continuous communication between the device and the clients accessing the device via the local Azure Resource Manager.
+1. The Resource Manager then sends the request to the resource providers that take the requested action.
+
+ The resource providers that are pre-registered with the Azure Stack Edge are as follows:
+
+   - **Compute Resource Provider**: The `Microsoft.Compute` or the Compute Resource Provider lets you deploy VMs on your Azure Stack Edge. The Compute Resource Provider includes the ability to create VMs and VM extensions.
+
+ - **Networking Resource Provider**: The `Microsoft.Network` or the Networking Resource Provider lets you create resources like network interfaces and virtual networks.
+
+   - **Storage Resource Provider**: The `Microsoft.Storage` or the Storage Resource Provider delivers an Azure-consistent blob storage service and Key Vault account management, providing management and auditing of secrets such as passwords and certificates.
+
+   - **Disk Resource Provider**: The `Microsoft.Disks` or the Disk Resource Provider lets you create managed disks that can be used to create VMs.
+
+   Resources are manageable items that are available through Azure Stack Edge, and the resource providers are responsible for supplying them. For example, virtual machines, storage accounts, and virtual networks are resources, and the Compute Resource Provider supplies the virtual machine resource.
+
+Because all requests are handled through the same API, you see consistent results and capabilities in all the different tools.
+
+The following image shows the mechanism of handling all the API requests and the role the local Azure Resource Manager plays in providing a consistent management layer to handle those requests.
+
+![Diagram for Azure Resource Manager.](media/azure-stack-edge-gpu-connect-resource-manager/edge-device-flow.svg)
++
+#### Use of Resource Manager templates
+
+Another key benefit of Azure Resource Manager is that it lets you use Resource Manager templates. These are JavaScript Object Notation (JSON) files in a declarative syntax that can be used to deploy the resources consistently and repeatedly. The declarative syntax lets you state "Here is what I intend to create" without having to write the sequence of programming commands to create it. For example, you can use these declarative syntax templates to deploy virtual machines on your Azure Stack Edge devices. For detailed information, see [Deploy virtual machines on your Azure Stack Edge device via templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md).
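To make the template workflow concrete, here's a minimal sketch of deploying such a template against the local Azure Resource Manager. The resource group, deployment name, and file paths are hypothetical; the linked article has complete templates and steps.

```powershell
# Hypothetical names and paths; adjust for your device and template.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myaserg" `
    -Name "vmdeployment" `
    -TemplateFile "C:\templates\CreateVM.json" `
    -TemplateParameterFile "C:\templates\CreateVM.parameters.json"
```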
+
+## Connect to the local Azure Resource Manager
+
+To create virtual machines, shares, or storage accounts on your Azure Stack Edge device, you'll need to create the corresponding resources. For example, a virtual machine needs resources such as a network interface and OS and data disks, from the networking, disk, and storage resource providers.
+
+To request the creation of any resources from the resource providers, you will need to first connect to the local Azure Resource Manager. For detailed steps, see [Connect to Azure Resource Manager on your Azure Stack Edge device](azure-stack-edge-gpu-connect-resource-manager.md).
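At a high level, the connection amounts to registering the device's local Resource Manager endpoint as an Azure environment and signing in to it. The following is only a sketch; the environment name, endpoint URL, and tenant ID are placeholders, and the linked article has the authoritative steps.

```powershell
# Placeholder endpoint and tenant ID; substitute the values for your device.
Add-AzEnvironment -Name "AzASE" -ARMEndpoint "https://management.myasegpu.wdshcsso.com"
Connect-AzAccount -EnvironmentName "AzASE" -TenantId "aaaabbbb-0000-cccc-1111-dddd2222eeee"
```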
+
+The first time you connect to Azure Resource Manager, you'll also need to reset your password. For detailed steps, see [Reset your Azure Resource Manager password](azure-stack-edge-gpu-set-azure-resource-manager-password.md).
++
+## Azure Resource Manager endpoints
+
+The local Azure Resource Manager and the STS services run on your device and can be reached at specific endpoints. The following table summarizes the various endpoints exposed on your device by these services, the supported protocols, and the ports to access those endpoints.
+
+| # | Endpoint | Supported protocols | Port used | Used for |
+| | | | | |
+| 1. | Azure Resource Manager | https | 443 | To connect to Azure Resource Manager for automation |
+| 2. | Security token service | https | 443 | To authenticate via access and refresh tokens |
++
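Because both endpoints answer over HTTPS on port 443, a quick reachability check from a Windows client can be sketched as follows; the hostname is a placeholder for your device's Azure Resource Manager endpoint.

```powershell
# Placeholder hostname; both local endpoints listen on TCP 443.
Test-NetConnection -ComputerName "management.myasegpu.wdshcsso.com" -Port 443
```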
+## Next steps
+
+[Connect to the local Azure Resource Manager on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
ddos-protection Ddos Protection Partner Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-partner-onboarding.md
The following steps are required for partners to configure integration with Azur
> [!NOTE]
> Only 1 DDoS Protection Plan needs to be created for a given tenant.

2. Deploy a service with a public endpoint in your (partner) subscriptions, such as a load balancer, firewall, or web application firewall.
-3. Enable Azure DDoS Protection Standard on the virtual network of the service that has public endpoints using DDoS Protection Plan created in the first step. For stpe-by-step instructions, see [Enable DDoS Standard Protection plan](manage-ddos-protection.md#enable-ddos-protection-for-an-existing-virtual-network)
+3. Enable Azure DDoS Protection Standard on the virtual network of the service that has public endpoints using the DDoS Protection Plan created in the first step. For step-by-step instructions, see [Enable DDoS Standard Protection plan](manage-ddos-protection.md#enable-ddos-protection-for-an-existing-virtual-network).
> [!IMPORTANT]
> After Azure DDoS Protection Standard is enabled on a virtual network, all public IPs within that virtual network are automatically protected. The origin of these public IPs can be either within Azure (client subscription) or outside of Azure.

4. Optionally, integrate Azure DDoS Protection Standard telemetry and attack analytics in your application-specific customer-facing dashboard. For more information about using telemetry, see [View and configure DDoS protection telemetry](telemetry.md).
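The plan creation in step 1 and the virtual network enablement in step 3 can be sketched in Azure PowerShell as follows. All resource names here are placeholders.

```powershell
# Hypothetical names throughout; substitute your own resource group, plan, and virtual network.
$plan = New-AzDdosProtectionPlan -ResourceGroupName "partner-rg" -Name "partner-ddos-plan" -Location "eastus"

# Enable the plan on the virtual network that fronts the public endpoints.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "partner-rg" -Name "partner-vnet"
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```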
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
Title: Connect an ST Microelectronics B-L475E-IOT01A or B-L4S5I-IOT01A to Azure IoT Central quickstart
-description: Use Azure RTOS embedded software to connect an ST Microelectronics B-L475E-IOT01A or B-L4S5I-IOT01A device to Azure IoT and send telemetry.
+ Title: Connect an STMicroelectronics B-L475E-IOT01A or B-L4S5I-IOT01A to Azure IoT Central quickstart
+description: Use Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A or B-L4S5I-IOT01A device to Azure IoT and send telemetry.
Last updated 06/02/2021
-# Quickstart: Connect an ST Microelectronics B-L475E-IOT01A or B-L4S5I-IOT01A Discovery kit to IoT Central
+# Quickstart: Connect an STMicroelectronics B-L475E-IOT01A or B-L4S5I-IOT01A Discovery kit to IoT Central
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
**Total completion time**: 30 minutes

[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/STM32L4_L4+)
-In this quickstart, you use Azure RTOS to connect either the ST Microelectronics [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) or [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) Discovery kit (hereafter, the STM DevKit) to Azure IoT.
+In this quickstart, you use Azure RTOS to connect either the STMicroelectronics [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) or [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) Discovery kit (hereafter, the STM DevKit) to Azure IoT.
You will complete the following tasks:
iot-dps How To Verify Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-verify-certificates.md
Title: Verify X.509 CA certificates with Azure IoT Hub Device Provisioning Servi
description: How to do proof-of-possession for X.509 CA certificates with Azure IoT Hub Device Provisioning Service (DPS) Previously updated : 02/26/2018 Last updated : 06/29/2021
A verified X.509 Certificate Authority (CA) certificate is a CA certificate that has been uploaded and registered to your provisioning service and has gone through proof-of-possession with the service.
+Verified certificates play an important role when using enrollment groups. Verifying certificate ownership provides an additional security layer by ensuring that the uploader of the certificate is in possession of the certificate's private key. Verification prevents a malicious actor sniffing your traffic from extracting an intermediate certificate and using that certificate to create an enrollment group in their own provisioning service, effectively hijacking your devices. By proving ownership of the root or an intermediate certificate in a certificate chain, you're proving that you have permission to generate leaf certificates for the devices that will be registering as a part of that enrollment group. For this reason, the root or intermediate certificate configured in an enrollment group must either be a verified certificate or must roll up to a verified certificate in the certificate chain a device presents when it authenticates with the service. To learn more about X.509 certificate attestation, see [X.509 certificates](concepts-x509-attestation.md) and [Controlling device access to the provisioning service with X.509 certificates](concepts-x509-attestation.md#controlling-device-access-to-the-provisioning-service-with-x509-certificates).
+
+## Automatic verification of intermediate or root CA through self-attestation
+If you are using an intermediate or root CA that you trust and know you have full ownership of, you can self-attest that you have verified the certificate.
+
+To add an auto-verified certificate, follow these steps:
+
+1. In the Azure portal, navigate to your provisioning service and open **Certificates** from the left-hand menu.
+2. Click **Add** to add a new certificate.
+3. Enter a friendly display name for your certificate. Browse to the .cer or .pem file that represents the public part of your X.509 certificate. Click **Upload**.
+4. Check the box next to **Set certificate status to verified on upload**.
+
+ ![Upload certificate_with_verified](./media/how-to-verify-certificates/add-certificate-with-verified.png)
+
+1. Click **Save**.
+1. Your certificate is shown in the certificate tab with the status *Verified*.
+
+ ![Certificate_Status](./media/how-to-verify-certificates/certificate-status.png)
+
+## Manual verification of intermediate or root CA
+Proof-of-possession involves the following steps:
+
+1. Get a unique verification code generated by the provisioning service for your X.509 CA certificate. You can do this from the Azure portal.
+2. Create an X.509 verification certificate with the verification code as its subject and sign the certificate with the private key associated with your X.509 CA certificate.
+3. Upload the signed verification certificate to the service. The service validates the verification certificate using the public portion of the CA certificate to be verified, thus proving that you are in possession of the CA certificate's private key.
-Verified certificates play an important role when using enrollment groups. Verifying certificate ownership provides an additional security layer by ensuring that the uploader of the certificate is in possession of the certificate's private key. Verification prevents a malicious actor sniffing your traffic from extracting an intermediate certificate and using that certificate to create an enrollment group in their own provisioning service, effectively hijacking your devices. By proving ownership of the root or an intermediate certificate in a certificate chain, you're proving that you have permission to generate leaf certificates for the devices that will be registering as a part of that enrollment group. For this reason, the root or intermediate certificate configured in an enrollment group must either be a verified certificate or must roll up to a verified certificate in the certificate chain a device presents when it authenticates with the service. To learn more about X.509 certificate attestation, see [X.509 certificates](concepts-x509-attestation.md) and [Controlling device access to the provisioning service with X.509 certificates](concepts-x509-attestation.md#controlling-device-access-to-the-provisioning-service-with-x509-certificates).
-## Register the public part of an X.509 certificate and get a verification code
+### Register the public part of an X.509 certificate and get a verification code
To register a CA certificate with your provisioning service and get a verification code that you can use during proof-of-possession, follow these steps.
To register a CA certificate with your provisioning service and get a verificati
![Verify certificate](./media/how-to-verify-certificates/verify-cert.png)
-## Digitally sign the verification code to create a verification certificate
+### Digitally sign the verification code to create a verification certificate
Now, you need to sign the *Verification Code* with the private key associated with your X.509 CA certificate, which generates a signature. This is known as [Proof of possession](https://tools.ietf.org/html/rfc5280#section-3.1) and results in a signed verification certificate.
Microsoft provides tools and samples that can help you create a signed verificat
The PowerShell and Bash scripts provided in the documentation and SDKs rely on [OpenSSL](https://www.openssl.org/). You may also use OpenSSL or other third-party tools to help you do proof-of-possession. For an example using tooling provided with the SDKs, see [Create an X.509 certificate chain](tutorial-custom-hsm-enrollment-group-x509.md#create-an-x509-certificate-chain).
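A minimal sketch of the signing step with plain OpenSSL (assumptions: all file names and the verification code below are placeholders, and a test-only root CA generated here stands in for your real CA certificate and private key):

```shell
# Placeholder verification code; use the one generated by the portal.
VERIFICATION_CODE="1234567890ABCDEF"

# (Test only) create a root CA key/cert to stand in for your real CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key.pem \
  -subj "/CN=Test Root CA" -days 30 -out root.cert.pem

# Create a CSR whose subject CN is the verification code.
openssl req -new -newkey rsa:2048 -nodes -keyout verification.key.pem \
  -subj "/CN=${VERIFICATION_CODE}" -out verification.csr

# Sign the CSR with the CA's private key; the output is the verification certificate.
openssl x509 -req -in verification.csr -CA root.cert.pem -CAkey root.key.pem \
  -CAcreateserial -days 30 -out verification.cert.pem
```

You would then upload `verification.cert.pem` (never your CA's private key) to the provisioning service.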
-## Upload the signed verification certificate
+### Upload the signed verification certificate
1. Upload the resulting signature as a verification certificate to your provisioning service in the portal. In **Certificate Details** on the Azure portal, use the _File Explorer_ icon next to the **Verification Certificate .pem or .cer file** field to upload the signed verification certificate from your system.
The PowerShell and Bash scripts provided in the documentation and SDKs rely on [
## Next steps - To learn about how to use the portal to create an enrollment group, see [Managing device enrollments with Azure portal](how-to-manage-enrollments.md).-- To learn about how to use the service SDKs to create an enrollment group, see [Managing device enrollments with service SDKs](./quick-enroll-device-x509-java.md).
+- To learn about how to use the service SDKs to create an enrollment group, see [Managing device enrollments with service SDKs](./quick-enroll-device-x509-java.md).
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
To create the device certificates signed by the intermediate certificate in the
## Verify ownership of the root certificate
+> [!NOTE]
+> As of July 1, 2021, you can verify certificates automatically through [self-attestation](how-to-verify-certificates.md#automatic-verification-of-intermediate-or-root-ca-through-self-attestation).
+>
+ 1. Using the directions from [Register the public part of an X.509 certificate and get a verification code](how-to-verify-certificates.md#register-the-public-part-of-an-x509-certificate-and-get-a-verification-code), upload the root certificate (`./certs/azure-iot-test-only.root.ca.cert.pem`) and get a verification code from DPS. 2. Once you have a verification code from DPS for the root certificate, run the following command from your certificate script working directory to generate a verification certificate.
When you're finished testing and exploring this device client sample, use the fo
In this tutorial, you provisioned an X.509 device using a custom HSM to your IoT hub. To learn how to provision IoT devices to multiple hubs continue to the next tutorial. > [!div class="nextstepaction"]
-> [Tutorial: Provision devices across load-balanced IoT hubs](tutorial-provision-multiple-hubs.md)
+> [Tutorial: Provision devices across load-balanced IoT hubs](tutorial-provision-multiple-hubs.md)
iot-edge How To Auto Provision Symmetric Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-symmetric-keys.md
This article shows you how to create a Device Provisioning Service individual or
* Create an enrollment for the device. * Install the IoT Edge runtime and connect to the IoT Hub.
+>[!TIP]
+>For a simplified experience, try the [Azure IoT Edge configuration tool](https://github.com/azure/iot-edge-config). This command-line tool, currently in public preview, installs IoT Edge on your device and provisions it using DPS and symmetric key attestation.
+ Symmetric key attestation is a simple approach to authenticating a device with a Device Provisioning Service instance. This attestation method represents a "Hello world" experience for developers who are new to device provisioning, or do not have strict security requirements. Device attestation using a [TPM](../iot-dps/concepts-tpm-attestation.md) or [X.509 certificates](../iot-dps/concepts-x509-attestation.md) is more secure, and should be used for more stringent security requirements. ## Prerequisites
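With an enrollment group, each device typically authenticates with a key derived from the group key: an HMAC-SHA256 of the device's registration ID, base64-encoded. A sketch of the derivation (assumption: the group key and registration ID below are made-up example values):

```shell
# Made-up example group key (base64) and registration ID; replace with your own.
GROUP_KEY_B64=$(printf '%s' "example-group-primary-key" | base64)
REG_ID="my-edge-device-1"

# Derived key = base64( HMAC-SHA256( base64decode(group key), registration ID ) )
DERIVED_KEY=$(printf '%s' "$REG_ID" | \
  openssl dgst -sha256 -mac HMAC \
    -macopt hexkey:"$(printf '%s' "$GROUP_KEY_B64" | base64 -d | xxd -p -c 256)" \
    -binary | base64)
echo "$DERIVED_KEY"
```

The derived key is what you place in the device's provisioning configuration; the group key itself never leaves your enrollment record.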
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-agent-provisioning.md
If you're setting up the IoT device/IoT Edge device for [package based updates](
1. Open a Terminal window.
1. Install the repository configuration that matches your device's operating system.

   ```shell
   curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
   ```

1. Copy the generated list to the sources.list.d directory.

   ```shell
   sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
   ```

1. Install the Microsoft GPG public key.

   ```shell
   curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
   ```
If you're setting up the IoT device/IoT Edge device for [package based updates](
## How to provision the Device Update agent as a Module Identity
-This section describes how to provision the Device Update agent as a module identity on IoT Edge enabled devices, non-Edge IoT devices, and other IoT devices.
-
+This section describes how to provision the Device Update agent as a module identity on:
+* IoT Edge enabled devices
+* Non-Edge IoT devices
+* Other IoT devices
+
+Follow the section below that matches the type of IoT device you are managing to add the Device Update agent.
### On IoT Edge enabled devices
Follow these instructions to provision the Device Update agent on [IoT Edge enab
1. Follow the instructions to [Install and provision the Azure IoT Edge runtime](../iot-edge/how-to-install-iot-edge.md?preserve-view=true&view=iotedge-2020-11).
-1. Install the Device Update image update agent
- - We provide sample images in [Artifacts](https://github.com/Azure/iot-hub-device-update/releases), the swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+1. Install the Device Update image update agent.
+
+ We provide sample images in the [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+
+1. Install the Device Update package update agent.
-1. Install the Device Update package update agent
- For the latest agent versions from packages.microsoft.com: Update package lists on your device and install the Device Update agent package and its dependencies using:
- ```shell
- sudo apt-get update
- ```
+
+ ```shell
+ sudo apt-get update
+ ```
- ```shell
- sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
- ```
+ ```shell
+ sudo apt-get install deviceupdate-agent deliveryoptimization-plugin-apt
+ ```
- For any 'rc' (release candidate) agent versions from [Artifacts](https://github.com/Azure/iot-hub-device-update/releases): Download the .deb file to the machine you want to install the Device Update agent on, then:
- ```shell
- sudo apt-get install -y ./"<PATH TO FILE>"/"<.DEP FILE NAME>"
- ```
+
+ ```shell
+ sudo apt-get install -y ./"<PATH TO FILE>"/"<.DEB FILE NAME>"
+ ```
1. You are now ready to start the Device Update agent on your IoT Edge device.
Follow these instructions to provision the Device Update agent on your IoT Linux
1. Install the IoT Identity Service and add the latest version to your IoT device. 1. Log onto the machine or IoT device. 1. Open a terminal window.
- 1. Install the latest [IoT Identity Service](https://github.com/Azure/iot-identity-service/blob/main/docs-dev/packaging.md#installing-and-configuring-the-package) on your IoT device using this command:
-
- ```shell
- sudo apt-get install aziot-identity-service
- ```
+ 1. Install the latest [IoT Identity Service](https://github.com/Azure/iot-identity-service/blob/main/docs/packaging.md#installing-and-configuring-the-package) on your IoT device using this command:
+ > [!Note]
+ > The IoT Identity service registers module identities with IoT Hub by using symmetric keys currently.
+
+ ```shell
+ sudo apt-get install aziot-identity-service
+ ```
1. Provision the IoT Identity service to get the IoT device information.
- 1. Create a custom copy of the configuration template so we can add the provisioning information. In a terminal, enter the below command.
+
+ Create a custom copy of the configuration template so we can add the provisioning information. In a terminal, enter the following command:
- ```shell
- sudo cp /etc/aziot/config.toml.template /etc/aziot/config.toml
- ```
+ ```shell
+ sudo cp /etc/aziot/config.toml.template /etc/aziot/config.toml
+ ```
1. Next, edit the configuration file to include the connection string of the device that you wish to act as the provisioner for this device or machine. In a terminal, enter the following command.
Follow these instructions to provision the Device Update agent on your IoT Linux
1. In the window, delete the string within the quotes to the right of 'connection_string' and then add your connection string there.
1. Save your changes to the file with 'Ctrl+X', then 'Y', and press the 'Enter' key.
-1. Now apply and restart the IoT Identity service with the command below. You should now see a "Done!" printout that means you have successfully configured the IoT Identity Service.
+1. Now apply and restart the IoT Identity service with the command below. You should now see a "Done!" printout that means you have successfully configured the IoT Identity Service.
> [!Note] > The IoT Identity service registers module identities with IoT Hub by using symmetric keys currently.
Follow these instructions to provision the Device Update agent on your IoT Linux
sudo aziotctl config apply ```
-1. Finally install the Device Update agent. We provide sample images in [Artifacts](https://github.com/Azure/iot-hub-device-update/releases), the swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
+1. Finally, install the Device Update agent. We provide sample images in the [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
-1. You are now ready to start the Device Update agent on your IoT device.
+1. You are now ready to start the Device Update agent on your IoT device.
### Other IoT devices

The Device Update agent can also be configured without the IoT Identity service for testing or on constrained devices. Follow the steps below to provision the Device Update agent using a connection string (from the module or device).
+1. We provide sample images in the [Artifacts](https://github.com/Azure/iot-hub-device-update/releases) repository. The swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board. The .gz file is the update you would import through Device Update for IoT Hub. For an example, see [How to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
-1. We provide sample images in [Artifacts](https://github.com/Azure/iot-hub-device-update/releases), the swUpdate file is the base image that you can flash onto a Raspberry Pi B3+ board, and the .gz file is the update you would import through Device Update for IoT Hub. See example of [how to flash the image to your IoT Hub device](./device-update-raspberry-pi.md#flash-sd-card-with-image).
-
-1. Log onto the machine or IoT Edge device/IoT device.
+1. Log onto the machine or IoT Edge device/IoT device.
-1. Open a terminal window.
+1. Open a terminal window.
+
+1. Add the connection string to the [Device Update configuration file](device-update-configuration-file.md):
-1. Add the connection string to the [Device Update configuration file](device-update-configuration-file.md):
1. Enter the following in the terminal window:
+
- [For Package updates](device-update-ubuntu-agent.md), use: `sudo nano /etc/adu/adu-conf.txt`
- [For Image updates](device-update-raspberry-pi.md), use: `sudo nano /adu/adu-conf.txt`
1. You should see a window open with some text in it. Delete the entire string following 'connection_string=' the first time you provision the Device Update agent on the IoT device. It is just placeholder text.
- 1. In the terminal, replace "<your-connection-string>" with the connection string of the device for your instance of Device Update agent.
-
- > [!Important]
- > Do not add quotes around the connection string.
- ```shell
+ 1. In the terminal, replace <your-connection-string> with the connection string of the device for your instance of Device Update agent. Select Enter and then **Save**. It should look like this example:
+
+ ```text
connection_string=<ADD CONNECTION STRING HERE>
- ```
-
- 1. Enter and save.
-
-1. Now you are now ready to start the Device Update agent on your IoT device.
+ ```
+
+ > [!Important]
+ > Do not add quotes around the connection string.
+
+1. You are now ready to start the Device Update agent on your IoT device.
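The manual edit above can also be scripted. A sketch (assumptions: the hub name, device ID, and key are placeholders; the file path defaults to a local file here for illustration, while the real package-update path is `/etc/adu/adu-conf.txt` and needs sudo):

```shell
# Hypothetical non-interactive equivalent of editing adu-conf.txt by hand.
ADU_CONF="${ADU_CONF:-./adu-conf.txt}"
CONN_STR='HostName=<your-iot-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>'

# Note: no quotes around the value inside the file, per the guidance above.
printf 'connection_string=%s\n' "$CONN_STR" > "$ADU_CONF"
```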
## How to start the Device Update Agent This section describes how to start and verify the Device Update agent as a module identity running successfully on your IoT device.
-1. Log into the machine or device that has the Device Update agent installed.
+1. Log into the machine or device that has the Device Update agent installed.
+
+1. Open a Terminal window, and enter the command below.
-1. Open a Terminal window, and enter the command below.
```shell sudo systemctl restart adu-agent ```
-1. You can check the status of the agent using the command below. If you see any issues, refer to this [troubleshooting guide](troubleshoot-device-update.md).
+1. You can check the status of the agent using the command below. If you see any issues, refer to this [troubleshooting guide](troubleshoot-device-update.md).
+
```shell sudo systemctl status adu-agent ``` You should see status OK.
-1. On the IoT Hub portal, go to IoT device or IoT Edge devices to find the device that you configured with Device Update agent. There you will see the Device Update agent running as a module. For example:
+1. On the IoT Hub portal, go to IoT device or IoT Edge devices to find the device that you configured with Device Update agent. There you will see the Device Update agent running as a module. For example:
:::image type="content" source="media/understand-device-update/device-update-module.png " alt-text="Diagram of Device Update module name." lightbox="media/understand-device-update/device-update-module.png":::
You can use the following pre-built images and binaries for a simple demonstrati
- [Package Update:Getting Started using Ubuntu Server 18.04 x64 Package agent](device-update-ubuntu-agent.md) -- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
+- [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
key-vault Security Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/security-features.md
Azure Private Link Service enables you to access Azure Key Vault and Azure hoste
- The HTTPS protocol allows the client to participate in TLS negotiation. **Clients can enforce the most recent version of TLS**, and whenever a client does so, the entire connection will use the corresponding level protection. The fact that Key Vault still supports older TLS versions wonΓÇÖt impair the security of connections using newer TLS versions. - Despite known vulnerabilities in TLS protocol, there is no known attack that would allow a malicious agent to extract any information from your key vault when the attacker initiates a connection with a TLS version that has vulnerabilities. The attacker would still need to authenticate and authorize itself, and as long as legitimate clients always connect with recent TLS versions, there is no way that credentials could have been leaked from vulnerabilities at old TLS versions.
-## Identity management
-
-When you create a key vault in an Azure subscription, it's automatically associated with the Azure AD tenant of the subscription. Anyone trying to manage or retrieve content from a vault must be authenticated by Azure AD. In both cases, applications can access Key Vault in three ways:
--- **Application-only**: The application represents a service principal or managed identity. This identity is the most common scenario for applications that periodically need to access certificates, keys, or secrets from the key vault. For this scenario to work, the `objectId` of the application must be specified in the access policy and the `applicationId` must _not_ be specified or must be `null`.-- **User-only**: The user accesses the key vault from any application registered in the tenant. Examples of this type of access include Azure PowerShell and the Azure portal. For this scenario to work, the `objectId` of the user must be specified in the access policy and the `applicationId` must _not_ be specified or must be `null`.-- **Application-plus-user** (sometimes referred as _compound identity_): The user is required to access the key vault from a specific application _and_ the application must use the on-behalf-of authentication (OBO) flow to impersonate the user. For this scenario to work, both `applicationId` and `objectId` must be specified in the access policy. The `applicationId` identifies the required application and the `objectId` identifies the user. Currently, this option isn't available for data plane Azure RBAC.-
-In all types of access, the application authenticates with Azure AD. The application uses any [supported authentication method](../../active-directory/develop/authentication-vs-authorization.md) based on the application type. The application acquires a token for a resource in the plane to grant access. The resource is an endpoint in the management or data plane, based on the Azure environment. The application uses the token and sends a REST API request to Key Vault. To learn more, review the [whole authentication flow](../../active-directory/develop/v2-oauth2-auth-code-flow.md).
-
-For full details, see [Key Vault Authentication Fundamentals](/azure/key-vault/general/authentication)
- ## Key Vault authentication options When you create a key vault in an Azure subscription, it's automatically associated with the Azure AD tenant of the subscription. All callers in both planes must register in this tenant and authenticate to access the key vault. In both cases, applications can access Key Vault in three ways:
The model of a single mechanism for authentication to both planes has several be
- If a user leaves, they instantly lose access to all key vaults in the organization. - Organizations can customize authentication by using the options in Azure AD, such as to enable multi-factor authentication for added security.
+For more information, see [Key Vault authentication fundamentals](authentication.md).
+ ## Access model overview Access to a key vault is controlled through two interfaces: the **management plane** and the **data plane**. The management plane is where you manage Key Vault itself. Operations in this plane include creating and deleting key vaults, retrieving Key Vault properties, and updating access policies. The data plane is where you work with the data stored in a key vault. You can add, delete, and modify keys, secrets, and certificates.
A security principal is an object that represents a user, group, service, or app
- A **group** security principal identifies a set of users created in Azure Active Directory. Any roles or permissions assigned to the group are granted to all of the users within the group. - A **service principal** is a type of security principal that identifies an application or service, which is to say, a piece of code rather than a user or group. A service principal's object ID is known as its **client ID** and acts like its username. The service principal's **client secret** or **certificate** acts like its password. Many Azure services support assigning a [Managed Identity](../../active-directory/managed-identities-azure-resources/overview.md) with automated management of the **client ID** and **certificate**. Managed identity is the most secure and recommended option for authenticating within Azure.
-For more information about authentication to Key Vault, see [Authenticate to Azure Key Vault](authentication.md)
+For more information about authentication to Key Vault, see [Authenticate to Azure Key Vault](authentication.md).
## Privileged access
Azure Key Vault soft-delete and purge protection allows you to recover deleted v
You should also take regular backups of your vault when objects within it are updated, deleted, or created.
-## Next Steps
+## Next steps
- [Azure Key Vault security baseline](security-baseline.md) - [Azure Key Vault best practices](security-baseline.md)
key-vault About Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/secrets/about-secrets.md
From a developer's perspective, Key Vault APIs accept and return secret values a
For highly sensitive data, clients should consider additional layers of protection for data. Encrypting data using a separate protection key prior to storage in Key Vault is one example.
-Key Vault also supports a contentType field for secrets. Clients may specify the content type of a secret to assist in interpreting the secret data when it's retrieved. The maximum length of this field is 255 characters. There are no pre-defined values. The suggested usage is as a hint for interpreting the secret data. For instance, an implementation may store both passwords and certificates as secrets, then use this field to differentiate. There are no predefined values.
+Key Vault also supports a contentType field for secrets. Clients may specify the content type of a secret to assist in interpreting the secret data when it's retrieved. The maximum length of this field is 255 characters. The suggested usage is as a hint for interpreting the secret data. For instance, an implementation may store both passwords and certificates as secrets, then use this field to differentiate. There are no predefined values.
## Encryption
How-to guides to control access in Key Vault:
- [About keys](../keys/about-keys.md) - [About certificates](../certificates/about-certificates.md) - [Secure access to a key vault](../general/security-features.md)-- [Key Vault Developer's Guide](../general/developers-guide.md)
+- [Key Vault Developer's Guide](../general/developers-guide.md)
lighthouse Monitor At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/monitor-at-scale.md
Title: Monitor delegated resources at scale description: Azure Lighthouse helps you use Azure Monitor Logs in a scalable way across customer tenants. Previously updated : 05/10/2021 Last updated : 06/30/2021
When you've determined which policies to deploy, you can [deploy them to your de
After you've deployed your policies, data will be logged in the Log Analytics workspaces you've created in each customer tenant. To gain insights across all managed customers, you can use tools such as [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md) to gather and analyze information from multiple data sources.
+## Query data across customer workspaces
+
+You can run [log queries](../../azure-monitor/logs/log-query-overview.md) to retrieve data across Log Analytics workspaces in different customer tenants by creating a union that includes multiple workspaces. By including the TenantId column, you can see which results belong to which tenants.
+
+The following example query creates a union on the AzureDiagnostics table across workspaces in two separate customer tenants. The results show the Category, ResourceGroup, and TenantId columns.
+
+```kusto
+union AzureDiagnostics,
+workspace("WS-customer-tenant-1").AzureDiagnostics,
+workspace("WS-customer-tenant-2").AzureDiagnostics
+| project Category, ResourceGroup, TenantId
+```
+
+For more examples of queries across multiple Log Analytics workspaces, see [Query across resources with Azure Monitor](../../azure-monitor/logs/cross-workspace-query.md).
## View alerts across customers You can view [alerts](../../azure-monitor/alerts/alerts-overview.md) for the delegated subscriptions in customer tenants that you manage.
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/upgrade-basic-standard.md
An Azure PowerShell script is available that does the following:
## Download the script
-Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzurePublicLBUpgrade/4.0).
+Download the migration script from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzurePublicLBUpgrade/5.0).
## Use the script There are two options for you depending on your local PowerShell environment setup and preferences:
machine-learning How To Troubleshoot Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-managed-online-endpoints.md
To get more details about this error, run:
az ml endpoint get-logs -n <endpoint-name> --deployment <deployment-name> --lines 100 ```
+### ERR_1350: Unable to download user model, not enough space on the disk
+
+This issue occurs when the size of the model is larger than the available disk space. Try an SKU with more disk space.
+ ### ERR_2100: Unable to start user container To run the `score.py` provided as part of the deployment, Azure creates a container that includes all the resources that the `score.py` needs, and runs the scoring script on that container.
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/isv-app-license.md
Last updated 04/30/2021
# ISV app license management
-> [!IMPORTANT]
-> This capability is currently in Public Preview.
- Applies to the following offer type: - Dynamics 365 for Customer Engagement & Power Apps
marketplace License Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/license-dashboard.md
# License dashboard in commercial marketplace analytics
-> [!IMPORTANT]
-> This capability is currently in Public Preview.
- This article provides information about the License dashboard in the commercial marketplace program in Partner Center. The License dashboard shows the following information: - Number of customers who purchased licenses
network-watcher Migrate To Connection Monitor From Connection Monitor Classic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md
After the migration begins, the following changes take place:
* The migrated connection monitors are no longer displayed as the older connection monitor solution. They're now available for use only in Connection Monitor. * Any external integrations, such as dashboards in Power BI and Grafana, and integrations with Security Information and Event Management (SIEM) systems, must be migrated manually. This is the only manual step you need to perform to migrate your setup.
+## Common errors encountered
+
+Here are some common errors you might encounter during the migration:
+
+| Error | Reason |
+|||
+|Following Connection monitors cannot be imported as one or more Subscription/Region combination don't have network watcher enabled. Enable network watcher and click refresh to import them. List of Connection monitor - {0} | This error occurs when you migrate tests from Connection Monitor (classic) and Network Watcher isn't enabled in one or more of the subscription and region combinations used by Connection Monitor (classic). Enable Network Watcher in those subscriptions and regions, and then select refresh to import the tests before migrating again |
+|Connection monitors having following tests cannot be imported as one or more azure virtual machines don't have network watcher extension installed. Install network watcher extension and click refresh to import them. List of tests - {0} | This error occurs when you migrate tests from Connection Monitor (classic) and the Network Watcher extension isn't installed on one or more of the Azure VMs used by Connection Monitor (classic). Install the Network Watcher extension on those VMs, and then select refresh before migrating again |
+|No rows to display | This error occurs when you try to migrate subscriptions from Connection Monitor (classic), but no Connection Monitor (classic) exists in those subscriptions |
+ ## Next steps

To learn more about Connection Monitor, see:

* [Migrate from Network Performance Monitor to Connection Monitor](./migrate-to-connection-monitor-from-network-performance-monitor.md)
* [Create Connection Monitor by using the Azure portal](./connection-monitor-create-using-portal.md)
network-watcher Migrate To Connection Monitor From Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md
After the migration, be sure to:
* While you're disabling NPM, re-create your alerts on the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables or use metrics.
* Migrate any external integrations to the NWConnectionMonitorTestResult and NWConnectionMonitorPathResult tables. Examples of external integrations are dashboards in Power BI and Grafana, and integrations with Security Information and Event Management (SIEM) systems.
+## Common Errors Encountered
+
+Here are some common errors you might encounter during the migration:
+
+| Error | Reason |
+|||
+| No valid NPM config found. Go to NPM UI to check config | This error occurs when you select **Import Tests from NPM** to migrate tests, but NPM isn't enabled in the workspace |
+|Workspace selected does not have 'Service Connectivity Monitor' config | This error occurs when you migrate tests from NPM's Service Connectivity Monitor to Connection Monitor, but no tests are configured in Service Connectivity Monitor |
+|Workspace selected does not have 'ExpressRoute Monitor' config | This error occurs when you migrate tests from NPM's ExpressRoute Monitor to Connection Monitor, but no tests are configured in ExpressRoute Monitor |
+|Workspace selected does not have 'Performance Monitor' config | This error occurs when you migrate tests from NPM's Performance Monitor to Connection Monitor, but no tests are configured in Performance Monitor |
+|Workspace selected does not have valid '{0}' tests | This error occurs when you migrate tests from NPM to Connection Monitor, but the feature you chose to migrate contains no valid tests |
+|Before you attempt migrate, please enable Network watcher extension in selection subscription and location of LA workspace selected | This error occurs when you migrate tests from NPM to Connection Monitor and the Network Watcher extension isn't enabled in the subscription and location of the selected Log Analytics workspace. Enable the Network Watcher extension before migrating the tests |
+|Few {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running any more. Enable agents and migrate to Connection Monitor. Click continue to migrate the tests that do not contain agents that are not active | This error occurs when some of the selected tests contain Network Watcher agents that are no longer active (they ran in the past but have since been shut down). You can deselect those tests and continue migrating only the tests whose agents are all active |
+|Your {1} tests contain agents that are no longer active. List of inactive agents - {0}. These agents may be running in the past but are shut down/not running any more. Enable agents and migrate to Connection Monitor | This error occurs when the selected tests contain Network Watcher agents that are no longer active (they ran in the past but have since been shut down). Enable the agents, and then migrate the tests to Connection Monitor |
+|An error occurred while importing tests to connection monitor | This error occurs when the migration of tests from NPM to Connection Monitor fails because of other errors |
## Next steps
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networking
|[KoçSistem](https://azure.kocsistem.com.tr/en)|[KoçSistem Managed Cloud Services for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.kocsistemcloudmanagementtool?tab=Overview)|[KoçSistem Azure ExpressRoute Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_express_route?tab=Overview)|[KoçSistem Azure Virtual WAN Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_virtual_wan?tab=Overview)||[KoçSistem Azure Security Center Managed Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_security_center?tab=Overview)|
|[Liquid Telecom](https://liquidcloud.africa/)|[Cloud Readiness - 2 Hour Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/liquidtelecommunicationsoperationslimited.liquid_cloud_readiness_assessment);[Liquid Azure Expert Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.5dab29ab-bb14-4df8-8978-9a8608a41ad7?tab=Overview)|[Liquid Managed ExpressRoute for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.42cfee0b-8f07-4948-94b0-c9fc3e1ddc42?tab=Overview)||||
|[Lumen](https://www.lumen.com/en-us/solutions/hybrid-cloud.html)||[ExpressRoute Consulting Svcs: 8-wk Implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/centurylink2362604-2362604.centurylink_consultingservicesforexpressroute); [Lumen Landing Zone for ExpressRoute 1 Day](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/centurylinklimited.centurylink_landing_zone_for_azure_expressroute)||||
-|[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy_vedge?tab=Overview); [SD-WAN Virtual Edge Install by Macquarie Cloud](https://azuremarketplace.microsoft.com/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)||[Managed Security by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)|
+|[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy_vedge?tab=Overview); [SD-WAN Virtual Edge offer by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview)||[Managed Security by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)|
|[Megaport](https://www.megaport.com/services/microsoft-expressroute/)||[Managed Routing Service for ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/megaport1582290752989.megaport_mcr?tab=Overview)||||
|[Nokia](https://www.nokia.com/networks/services/managed-services/)|||[NBConsult Nokia Nuage SDWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nbconsult1588859334197.nbconsult-nokia-nuage?tab=Overview); [Nuage SD-WAN 2.0 Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.nuage_sd-wan_2-0_azure_virtual_wan?tab=Overview)|[Nokia 4G & 5G Private Wireless (NDAC)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.ndac_5g-ready_private_wireless?tab=Overview)|
|[NTT Ltd](https://www.nttglobal.net/)|[Azure Cloud Discovery: 2-Week Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/capside.cloud-discovery-workshops-capside)|[NTT Managed ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_managed_expressroute_service?tab=Overview);[NTT Managed IP VPN Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_managed_ip_vpn_service?tab=Overview)|[NTT Managed SD-WAN Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_mng_sdwan_1?tab=Overview)|||
private-link Private Link Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-link-service-overview.md
Complete alias: *Prefix*. {GUID}.*region*.azure.privatelinkservice
## Control service exposure
-Private Link service provides you options to control the exposure of your service through "Visibility" setting. You can make the service private for consumption from different VNets you own (Azure RBAC permissions only), restrict the exposure to a limited set of subscriptions that you trust, or make it public so that all Azure subscriptions can request connections on the Private Link service. Your visibility settings decide whether a consumer can connect to your service or not.
+The Private Link service provides you with three options in the **Visibility** setting to control the exposure of your service. Your visibility setting determines whether a consumer can connect to your service. Here are the visibility setting options, from most restrictive to least restrictive:
+
+- **Role-based access control only**: If your service is for private consumption from different VNets that you own, you can use RBAC as an access control mechanism inside subscriptions that are associated with the same Active Directory tenant.
+- **Restricted by subscription**: If your service will be consumed across different tenants, you can restrict the exposure to a limited set of subscriptions that you trust. Authorizations can be pre-approved.
+- **Anyone with your alias**: If you want to make your service public and allow anyone with your Private Link service alias to request a connection, select this option.
## Control service access
purview How To Lineage Sql Server Integration Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-lineage-sql-server-integration-services.md
+
+ Title: Lineage from SQL Server Integration Services
+description: This article describes the data lineage extraction from SQL Server Integration Services.
+ Last updated: 06/30/2021
+# How to get lineage from SQL Server Integration Services (SSIS) into Azure Purview
+
+This article elaborates on the data lineage aspects of SQL Server Integration Services (SSIS) in Azure Purview.
+
+## Prerequisites
+
+- [Lift and shift SQL Server Integration Services workloads to the cloud](https://docs.microsoft.com/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview)
+
+## Supported scenarios
+
+The current scope of support includes lineage extraction from SSIS packages executed by the Azure Data Factory SSIS integration runtime.
+
+On-premises SSIS lineage extraction isn't supported yet.
+
+### Supported data stores
+
+| Data store | Supported |
+| - | - |
+| Azure Blob Storage | Yes |
+| Azure Data Lake Storage Gen1 | Yes |
+| Azure Data Lake Storage Gen2 | Yes |
+| Azure File Storage | Yes |
+| Azure SQL Database \* | Yes |
+| Azure SQL Managed Instance \*| Yes |
+| Azure Synapse Analytics \* | Yes |
+| SQL Server \* | Yes |
+
+*\* Azure Purview currently doesn't support query or stored procedure for lineage or scanning. Lineage is limited to table and view sources only.*
++
+## How to bring SSIS lineage into Purview
+
+### Step 1. [Connect a Data Factory to Azure Purview](how-to-link-azure-data-factory.md)
+
+### Step 2. Trigger SSIS activity execution in Azure Data Factory
+
+You can [run an SSIS package with the Execute SSIS Package activity](../data-factory/how-to-invoke-ssis-package-ssis-activity.md) or [run an SSIS package with Transact-SQL in the ADF SSIS integration runtime](../data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md).
+
+After the Execute SSIS Package activity finishes running, you can check the lineage report status in the activity output in the [Data Factory activity monitor](../data-factory/monitor-visually.md#monitor-activity-runs).
+
+### Step 3. Browse lineage Information in your Azure Purview account
+
+- You can browse the Data Catalog by choosing the asset type "SQL Server Integration Services".
++
+- You can also search the Data Catalog using keywords.
++
+- You can view lineage information for an SSIS Execute Package activity and open the activity in Data Factory to view or edit its settings.
++
+- You can choose a data source to drill into how the columns in the source are mapped to the columns in the destination.
++
+## Next steps
+
+- [Lift and shift SQL Server Integration Services workloads to the cloud](https://docs.microsoft.com/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview)
+- [Learn about Data lineage in Azure Purview](catalog-lineage-user-guide.md)
+- [Link Azure Data Factory to push automated lineage](how-to-link-azure-data-factory.md)
purview How To Link Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-link-azure-data-factory.md
In addition to lineage, the data asset schema (shown in Asset -> Schema tab) is
### Data Factory Execute SSIS Package support
-| Data store | Supported |
-| - | - |
-| Azure Blob Storage | Yes |
-| Azure Data Lake Storage Gen1 | Yes |
-| Azure Data Lake Storage Gen2 | Yes |
-| Azure File Storage | Yes |
-| Azure SQL Database \* | Yes |
-| Azure SQL Managed Instance \*| Yes |
-| Azure Synapse Analytics \* | Yes |
-| SQL Server \* | Yes |
-
-*\* Azure Purview currently doesn't support query or stored procedure for lineage or scanning. Lineage is limited to table and view sources only.*
+Refer to [supported data stores](how-to-lineage-sql-server-integration-services.md#supported-data-stores).
> [!Note]
> Azure Data Lake Storage Gen2 is now generally available. We recommend that you start using it today. For more information, see the [product page](https://azure.microsoft.com/en-us/services/storage/data-lake-storage/).
purview Tutorial Data Sources Readiness https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-data-sources-readiness.md
Title: 'Tutorial: Check data sources readiness at scale (preview)'
-description: In this tutorial, you will run a subset of tools to verify readiness of your Azure data sources before registering and scanning them in Azure Purview.
+ Title: 'Check data source readiness at scale (preview)'
+description: In this tutorial, you'll verify the readiness of your Azure data sources before you register and scan them in Azure Purview.
Last updated 05/28/2021
-# Customer intent: As a data steward or catalog administrator, I need to onboard Azure data sources at scale before registering and scanning.
+# Customer intent: As a data steward or catalog administrator, I need to onboard Azure data sources at scale before I register and scan them.
-# Tutorial: Check Data Sources Readiness at Scale (Preview)
+# Tutorial: Check data source readiness at scale (preview)
> [!IMPORTANT]
-> Azure Purview is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Purview is currently in preview. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta or preview or are otherwise not yet released for general availability.
-To scan data sources, Azure Purview requires access to data sources. This is done by using **Credentials**. A credential is an authentication information that Azure Purview can use to authenticate to your registered data sources. There are few options to setup the credentials for Azure Purview such as using Managed Identity assigned to the Purview Account, using a Key Vault or a Service Principals.
+To scan data sources, Azure Purview requires access to them. It uses credentials to obtain this access. A *credential* is the authentication information that Azure Purview can use to authenticate to your registered data sources. There are a few ways to set up the credentials for Azure Purview, including:
+- The managed identity assigned to the Azure Purview account.
+- Secrets stored in Azure Key Vault.
+- Service principals.
-In this *two-part tutorial series*, we aim to help you to verify and configure required Azure role assignments and network access for various Azure Data Sources across your Azure subscriptions at scale, so you can then register and scan your Azure data sources in Azure Purview.
+In this two-part tutorial series, we'll help you verify and configure required Azure role assignments and network access for various Azure data sources across your Azure subscriptions at scale. You can then register and scan your Azure data sources in Azure Purview.
-Run [Azure Purview data sources readiness checklist](https://github.com/Azure/Purview-Samples/tree/master/Data-Source-Readiness) script after you deploy your Azure Purview account and before registering and scanning your Azure data sources.
+Run the [Azure Purview data sources readiness checklist](https://github.com/Azure/Purview-Samples/tree/master/Data-Source-Readiness) script after you deploy your Azure Purview account and before you register and scan your Azure data sources.
-In part 1 of this tutorial series, you will:
+In part 1 of this tutorial series, you'll:
> [!div class="checklist"]
>
-> * Locate your data sources and prepare a list of data sources subscriptions.
-> * Run readiness checklist script to find any missing RBAC and network configurations across your data sources in Azure.
-> * Review missing Azure Purview MSI required role assignments and network configurations from the output report.
+> * Locate your data sources and prepare a list of data source subscriptions.
+> * Run the readiness checklist script to find any missing role-based access control (RBAC) or network configurations across your data sources in Azure.
+> * In the output report, review missing network configurations and role assignments required by Azure Purview Managed Identity (MSI).
> * Share the report with the Azure subscription owners of your data sources so they can take the suggested actions.

## Prerequisites
-* Azure Subscriptions where your data sources are located. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+* Azure subscriptions where your data sources are located. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
* An [Azure Purview account](create-catalog-portal.md).
-* An Azure Key Vault resource in each subscription if any data sources such as Azure SQL Database, Azure Synapse or Azure SQL Manged Instances.
+* An Azure Key Vault resource in each subscription that has data sources like Azure SQL Database, Azure Synapse Analytics, or Azure SQL Managed Instance.
* The [Azure Purview data sources readiness checklist](https://github.com/Azure/Purview-Samples/tree/master/Data-Source-Readiness) script.

> [!NOTE]
-> The Azure Purview data sources readiness checklist is only available for **Windows**.
-> This readiness checklist script currently is supported for **Azure Purview Managed Identity (MSI)**.
+> The Azure Purview data sources readiness checklist is available only for Windows.
+> This readiness checklist script is currently supported for Azure Purview MSI.
-## Prepare data sources' Azure subscriptions list
+## Prepare Azure subscriptions list for data sources
-Before running the script, create a csv file (e.g. "C:\temp\Subscriptions.csv) with 4 columns:
-
-1. Column name: `SubscriptionId`
- This column must contain all your Azure subscription IDs where your data sources reside.
-
- for example each column should have one subscription ID: 12345678-aaaa-bbbb-cccc-1234567890ab
-
-2. Column name: `KeyVaultName`
- Provide existing key vault name resource that is deployed in the same corresponding data source subscription.
-
- example: ContosoDevKeyVault
+Before running the script, create a .csv file (for example, C:\temp\Subscriptions.csv) with four columns:
-3. Column name: `SecretNameSQLUserName`
- Provide the name of an existing Azure key vault secret that contains an Azure AD user name that can logon to Azure Synapse, Azure SQL Servers or Azure SQL Managed Instance through Azure AD authentication.
+|Column name|Description|Example|
+|-|-|-|
+|`SubscriptionId`|Azure subscription IDs for your data sources.|12345678-aaaa-bbbb-cccc-1234567890ab|
+|`KeyVaultName`|Name of existing key vault that's deployed in the data source subscription.|ContosoDevKeyVault|
+|`SecretNameSQLUserName`|Name of an existing Azure Key Vault secret that contains an Azure Active Directory (Azure AD) user name that can sign in to Azure Synapse, Azure SQL Database, or Azure SQL Managed Instance by using Azure AD authentication.|ContosoDevSQLAdmin|
+|`SecretNameSQLPassword`|Name of an existing Azure Key Vault secret that contains an Azure AD user password that can sign in to Azure Synapse, Azure SQL Database, or Azure SQL Managed Instance by using Azure AD authentication.|ContosoDevSQLPassword|
- example: ContosoDevSQLAdmin
-4. Column name: `SecretNameSQLPassword`
- Provide the name of an existing Azure key vault secret that contains an Azure AD user password that can logon to Azure Synapse, Azure SQL Servers or Azure SQL Managed Instance through Azure AD authentication.
-
- example: ContosoDevSQLPassword
-
- **Sample csv file:**
+**Sample .csv file:**
- :::image type="content" source="./media/tutorial-data-sources-readiness/subscriptions-input.png" alt-text="Subscriptions List" lightbox="./media/tutorial-data-sources-readiness/subscriptions-input.png":::
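For reference, a hypothetical Subscriptions.csv matching the columns above might look like the following fragment (the values reuse the illustrative examples from the table and do not refer to real resources):

```csv
SubscriptionId,KeyVaultName,SecretNameSQLUserName,SecretNameSQLPassword
12345678-aaaa-bbbb-cccc-1234567890ab,ContosoDevKeyVault,ContosoDevSQLAdmin,ContosoDevSQLPassword
```

Each additional row lists another data source subscription with its own key vault and secret names.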
+
+> [!NOTE]
+> You can update the file name and path in the code, if you need to.
- > [!NOTE]
- > You can update the file name and path in the code, if needed.
-<br>
-## Prepare to run the script and install required PowerShell modules
+## Run the script and install the required PowerShell modules
-Follow these steps to run the script from your Windows machine:
+Follow these steps to run the script from your Windows computer:
-1. [Download Azure Purview data sources readiness checklist](https://github.com/Azure/Purview-Samples/tree/master/Data-Source-Readiness) script to the location of your choice.
+1. [Download the Azure Purview data sources readiness checklist](https://github.com/Azure/Purview-Samples/tree/master/Data-Source-Readiness) script to the location of your choice.
-2. On your computer, enter **PowerShell** in the search box on the Windows taskbar. In the search list, right-click **Windows PowerShell**, and then select **Run as administrator**.
+2. On your computer, enter **PowerShell** in the search box on the Windows taskbar. In the search list, right-click **Windows PowerShell** and then select **Run as administrator**.
-3. In the PowerShell window, enter the following command, replacing `<path-to-script>` with the folder path of the extracted the script file.
+3. In the PowerShell window, enter the following command. (Replace `<path-to-script>` with the folder path of the extracted script file.)
```powershell
dir -Path <path-to-script> | Unblock-File
```
-4. Enter the following command to install the Azure cmdlets.
+4. Enter the following command to install the Azure cmdlets:
```powershell
Install-Module -Name Az -AllowClobber -Scope CurrentUser
```
-6. If you see the warning prompt, *NuGet provider is required to continue*, enter **Y**, and then press Enter.
+6. If you see the prompt *NuGet provider is required to continue*, enter **Y**, and then select **Enter**.
-7. If you see the warning prompt, *Untrusted repository*, enter **A**, and then press Enter.
+7. If you see the prompt *Untrusted repository*, enter **A**, and then select **Enter**.
-5. Repeat the previous steps to install `Az.Synpase` and `AzureAD` modules.
+5. Repeat the previous steps to install the `Az.Synapse` and `AzureAD` modules.
It might take up to a minute for PowerShell to install the required modules.
-<br>
-## Collect additional data needed to run the script
+## Collect other data needed to run the script
-Before you run the PowerShell script to verify data sources subscriptions readiness, get the values of the following arguments to use in the scripts:
+Before you run the PowerShell script to verify the readiness of data source subscriptions, obtain the values of the following arguments to use in the scripts:
-- `AzureDataType`: choose any of the following options as your data source type to run the readiness for the data type across your subscriptions:
+- `AzureDataType`: Choose any of the following options as your data-source type to check the readiness for the data type across your subscriptions:
- `BlobStorage`
Before you run the PowerShell script to verify data sources subscriptions readin
- `All`

-- `PurviewAccount`: Your existing Azure Purview Account resource name.
+- `PurviewAccount`: Your existing Azure Purview account resource name.
-- `PurviewSub`: Subscription ID where Azure Purview Account is deployed.
+- `PurviewSub`: Subscription ID where the Azure Purview account is deployed.
## Verify your permissions

Make sure your user has the following roles and permissions:
-Role | Scope |
+Role or permission | Scope |
|-|--|
-| Global Reader | Azure AD Tenant |
-| Reader | Azure Subscriptions where your Azure Data Sources reside |
-| Reader | Subscription where Azure Purview Account is created |
-| SQL Admin (Azure AD Authentication) | Azure Synapse Dedicated Pools, Azure SQL Servers, Azure SQL Managed Instances |
-| Access to your Azure Key Vault | Access to get/list Key Vault's secret or Azure Key Vault Secret User |
+| **Global Reader** | Azure AD tenant |
+| **Reader** | Azure subscriptions where your Azure data sources are located |
+| **Reader** | Subscription where your Azure Purview account was created |
+| **SQL Admin** (Azure AD Authentication) | Azure Synapse dedicated pools, Azure SQL Database instances, Azure SQL managed instances |
+| Access to your Azure key vault | Access to get/list key vault's secret or Azure Key Vault secret user |
-<br>
## Run the client-side readiness script
-Run the script using the following steps:
+Run the script by completing these steps:
-1. Use the following command to navigate to the script's directory. Replace `path-to-script` with the folder path of the extracted file.
+1. Use the following command to go to the script's folder. Replace `<path-to-script>` with the folder path of the extracted file.
```powershell
cd <path-to-script>
```
-2. The following command sets the execution policy for the local computer. Enter **A** for *Yes to All* when you are prompted to change the execution policy.
+2. Run the following command to set the execution policy for the local computer. Enter **A** for *Yes to All* when you're prompted to change the execution policy.
```powershell
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
```
-3. Execute the script using the following parameters. Replace the `DataType`, `PurviewName` and `SubscriptionID` placeholders.
+3. Run the script with the following parameters. Replace the `DataType`, `PurviewName`, and `SubscriptionID` placeholders.
```powershell
.\purview-data-sources-readiness-checklist.ps1 -AzureDataType <DataType> -PurviewAccount <PurviewName> -PurviewSub <SubscriptionID>
```
- When you run the command, a pop-up window may appear twice for you to sign in to Azure and Azure AD using your Azure Active Directory credentials.
+ When you run the command, a pop-up window might appear twice prompting you to sign in to Azure and Azure AD by using your Azure Active Directory credentials.
-It can take several minutes until the report is fully generated depending on number Azure subscriptions and resources in the environment.
+It can take several minutes to create the report, depending on the number of Azure subscriptions and resources in the environment.
-After the process has finished, review the output report which demonstrates the detected missing configurations in your Azure subscriptions or resources. The results may appear as _Passed_, _Not Passed_ or _Awareness_. You can share the results with the corresponding subscriptions admins in your organization so they can configure the required settings.
+After the process completes, review the output report, which demonstrates the detected missing configurations in your Azure subscriptions or resources. The results can appear as _Passed_, _Not Passed_, or _Awareness_. You can share the results with the corresponding subscription admins in your organization so they can configure the required settings.
-<br>
-## Additional Information
+## More information
-### What data sources are supported in the script?
+### What data sources are supported by the script?
-Currently, the following data sources are supported in the script:
+Currently, the following data sources are supported by the script:
- Azure Blob Storage (BlobStorage)
-- Azure Data Lake Storage Gen 2 (ADLSGen2)
-- Azure Data Lake Storage Gen 1 (ADLSGen1)
+- Azure Data Lake Storage Gen2 (ADLSGen2)
+- Azure Data Lake Storage Gen1 (ADLSGen1)
- Azure SQL Database (AzureSQLDB)
- Azure SQL Managed Instance (AzureSQLMI)
- Azure Synapse (Synapse) dedicated pool
-You can choose **all** or any of these data sources as input parameter when running the script.
+You can choose all or any of these data sources as the input parameter when you run the script.
### What checks are included in the results?

#### Azure Blob Storage (BlobStorage)

-- RBAC: Verify if Azure Purview MSI has 'Storage Blob Data Reader role' in each of the subscriptions below the selected scope.
-- RBAC: Verify if Azure Purview MSI has 'Reader' role on selected scope.
-- Service Endpoint: Verify if Service Endpoint is ON, AND check if 'Allow trusted Microsoft services to access this storage account' is enabled.
-- Networking: check if Private Endpoint is created for storage and enabled for Blob.
+- RBAC. Check whether Azure Purview MSI is assigned the **Storage Blob Data Reader** role in each of the subscriptions below the selected scope.
+- RBAC. Check whether Azure Purview MSI is assigned the **Reader** role on the selected scope.
+- Service endpoint. Check whether service endpoint is on, and check whether **Allow trusted Microsoft services to access this storage account** is enabled.
+- Networking: Check whether private endpoint is created for storage and enabled for Blob Storage.
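The RBAC checks above can also be spot-checked by hand with the Az PowerShell module. This is a sketch only; the Purview account display name and the subscription scope are placeholder assumptions:

```powershell
# Sketch: look up the Purview managed identity and list its role assignments
# at a subscription scope. 'ContosoPurview' and the subscription ID are placeholders.
Connect-AzAccount
$msi = Get-AzADServicePrincipal -DisplayName "ContosoPurview"
Get-AzRoleAssignment -ObjectId $msi.Id `
    -Scope "/subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab" |
    Where-Object { $_.RoleDefinitionName -in "Reader", "Storage Blob Data Reader" }
```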
-#### Azure Data Lake Storage Gen 2 (ADLSGen2)
+#### Azure Data Lake Storage Gen2 (ADLSGen2)
-- RBAC: Verify if Azure Purview MSI has 'Storage Blob Data Reader' role in each of the subscriptions below the selected scope.
-- RBAC: Verify if Azure Purview MSI has 'Reader' role on selected scope.
-- Service Endpoint: Verify if Service Endpoint is ON, AND check if 'Allow trusted Microsoft services to access this storage account' is enabled.
-- Networking: check if Private Endpoint is created for storage and enabled for Blob Storage.
+- RBAC. Check whether Azure Purview MSI is assigned the **Storage Blob Data Reader** role in each of the subscriptions below the selected scope.
+- RBAC. Check whether Azure Purview MSI is assigned the **Reader** role on the selected scope.
+- Service endpoint. Check whether service endpoint is on, and check whether **Allow trusted Microsoft services to access this storage account** is enabled.
+- Networking: Check whether private endpoint is created for storage and enabled for Blob Storage.
-#### Azure Data Lake Storage Gen 1 (ADLSGen1)
+#### Azure Data Lake Storage Gen1 (ADLSGen1)
-- Networking: Verify if Service Endpoint is ON, AND check if 'Allow all Azure services to access this Data Lake Storage Gen1 account' is enabled.
-- Permissions: Verify if Azure Purview MSI has access to Read/Execute.
+- Networking. Check whether service endpoint is on, and check whether **Allow all Azure services to access this Data Lake Storage Gen1 account** is enabled.
+- Permissions. Check whether Azure Purview MSI has Read/Execute permissions.
#### Azure SQL Database (AzureSQLDB)

-- SQL Servers:
- - Network: Verify if Public or Private Endpoint is enabled.
- - Firewall: Verify if 'Allow Azure services and resources to access this server' is enabled.
- - Azure AD Admin: Check if Azure SQL Server has AAD Authentication.
- - AAD Admin: Populate Azure SQL Server AAD Admin user or group.
+- SQL Server instances:
+ - Network. Check whether public endpoint or private endpoint is enabled.
+ - Firewall. Check whether **Allow Azure services and resources to access this server** is enabled.
+ - Azure AD administration. Check whether Azure SQL Server has Azure AD authentication.
+ - Azure AD administration. Populate the Azure SQL Server Azure AD admin user or group.
-- SQL Databases:
- - SQL Role: Check if Azure Purview MSI has db_datareader role.
+- SQL databases:
+ - SQL role. Check whether Azure Purview MSI is assigned the **db_datareader** role.
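The **db_datareader** check can also be run by hand against a single database, for example with `Invoke-Sqlcmd` from the SqlServer PowerShell module. The server and database names below are placeholders, and you must supply credentials that can sign in:

```powershell
# Sketch: list members of db_datareader in one database.
# Requires the SqlServer module; server and database names are placeholders.
$query = @"
SELECT p.name AS member_name
FROM sys.database_role_members rm
JOIN sys.database_principals r ON rm.role_principal_id = r.principal_id
JOIN sys.database_principals p ON rm.member_principal_id = p.principal_id
WHERE r.name = 'db_datareader';
"@
Invoke-Sqlcmd -ServerInstance "contoso-dev.database.windows.net" `
    -Database "ContosoDB" -Query $query
```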
#### Azure SQL Managed Instance (AzureSQLMI)

-- SQL Managed Instance Servers:
- - Network: Verify if Public or Private Endpoint is enabled.
- - ProxyOverride: Verify if Azure SQL Managed Instance is configured as Proxy or Redirect.
- - Networking: Verify if NSG has an inbound rule to allow AzureCloud over required ports; Redirect: 1433 and 11000-11999 or Proxy: 3342.
- - Azure AD Admin: Check if Azure SQL Server has AAD Authentication.
- - AAD Admin: Populate Azure SQL Server AAD Admin user or group.
+- SQL Managed Instance servers:
+ - Network. Check whether public endpoint or private endpoint is enabled.
+ - ProxyOverride. Check whether Azure SQL Managed Instance is configured as Proxy or Redirect.
+ - Networking. Check whether NSG has an inbound rule to allow AzureCloud over required ports:
+ - Redirect: 1433 and 11000-11999
+ or
+ - Proxy: 3342
+ - Azure AD administration. Check whether Azure SQL Server has Azure AD authentication.
+ - Azure AD administration. Populate the Azure SQL Server Azure AD admin user or group.
-- SQL Databases:
- - SQL Role: Check if Azure Purview MSI has db_datareader role.
+- SQL databases:
+ - SQL role. Check whether Azure Purview MSI is assigned the **db_datareader** role.
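The NSG port check listed above can be approximated by hand; this sketch assumes placeholder NSG and resource group names:

```powershell
# Sketch: list inbound allow rules on the managed instance's NSG.
# 'ContosoMiNsg' and 'ContosoRg' are placeholders.
$nsg = Get-AzNetworkSecurityGroup -Name "ContosoMiNsg" -ResourceGroupName "ContosoRg"
$nsg.SecurityRules |
    Where-Object { $_.Direction -eq "Inbound" -and $_.Access -eq "Allow" } |
    Select-Object Name, SourceAddressPrefix, DestinationPortRange
# Look for AzureCloud as the source over ports 1433 and 11000-11999 (Redirect)
# or 3342 (Proxy), as described in the checks above.
```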
#### Azure Synapse (Synapse) dedicated pool

-- RBAC: Verify if Azure Purview MSI has 'Storage Blob Data Reader role' in each of the subscriptions below the selected scope.
-- RBAC: Verify if Azure Purview MSI has 'Reader' role on selected scope.
-- SQL Servers (dedicated pools):
- - Network: Verify if Public or Private Endpoint is enabled.
- - Firewall: Verify if 'Allow Azure services and resources to access this server' is enabled.
- - Azure AD Admin: Check if Azure SQL Server has AAD Authentication.
- - AAD Admin: Populate Azure SQL Server AAD Admin user or group.
+- RBAC. Check whether Azure Purview MSI is assigned the **Storage Blob Data Reader** role in each of the subscriptions below the selected scope.
+- RBAC. Check whether Azure Purview MSI is assigned the **Reader** role on the selected scope.
+- SQL Server instances (dedicated pools):
+ - Network: Check whether public endpoint or private endpoint is enabled.
+ - Firewall: Check whether **Allow Azure services and resources to access this server** is enabled.
+ - Azure AD administration: Check whether Azure SQL Server has Azure AD authentication.
+ - Azure AD administration: Populate the Azure SQL Server Azure AD admin user or group.
-- SQL Databases:
- - SQL Role: Check if Azure Purview MSI has db_datareader role.
+- SQL databases:
+ - SQL role. Check whether Azure Purview MSI is assigned the **db_datareader** role.
## Next steps

In this tutorial, you learned how to:

> [!div class="checklist"]
>
-> * Run Azure Purview readiness checklist to verify your Azure subscriptions missing configuration at scale, before they can be registered and scanned in Azure Purview.
+> * Run the Azure Purview readiness checklist to check, at scale, whether your Azure subscriptions are missing configuration, before you register and scan them in Azure Purview.
-Advance to the next tutorial to learn how to navigate the home page and search for an asset.
+Go to the next tutorial to learn how to identify the required access and set up required authentication and network rules for Azure Purview across Azure data sources:
> [!div class="nextstepaction"]
> [Configure access to data sources for Azure Purview MSI at scale](tutorial-msi-configuration.md)
purview Tutorial Msi Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/tutorial-msi-configuration.md
Title: 'Tutorial: Configure access to data sources for Azure Purview MSI at scale (preview)'
-description: In this tutorial, you will run a subset of tools configure Azure MSI settings on your Azure data sources subscriptions.
+ Title: 'Configure access to data sources for Azure Purview MSI at scale (preview)'
+description: In this tutorial, you'll configure Azure MSI settings on your Azure data source subscriptions.
Last updated 05/28/2021
-# Customer intent: As a data steward or catalog administrator, I need to onboard Azure data sources at scale before registering and scanning.
+# Customer intent: As a data steward or catalog administrator, I need to onboard Azure data sources at scale before I register and scan them.
-# Tutorial: Configure access to data sources for Azure Purview MSI at scale (Preview)
+# Tutorial: Configure access to data sources for Azure Purview MSI at scale (preview)
> [!IMPORTANT]
-> Azure Purview is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Purview is currently in preview. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta or preview or are otherwise not yet released for general availability.
-To scan data sources, Azure Purview requires access to data sources. This tutorial is aimed to assist Azure Subscription owners and Azure Purview Data Source Administrators to identify required access and setup required authentication and network rules for Azure Purview across Azure data sources.
+To scan data sources, Azure Purview requires access to them. This tutorial is intended for Azure subscription owners and Azure Purview Data Source Administrators. It will help you identify required access and set up required authentication and network rules for Azure Purview across Azure data sources.
-In part 2 of this tutorial series, you will:
+In part 2 of this tutorial series, you'll:
> [!div class="checklist"]
>
-> * Locate your data sources and prepare a list of data sources subscriptions.
-> * Run the script to configure any missing RBAC and required network configurations across your data sources in Azure.
+> * Locate your data sources and prepare a list of data source subscriptions.
+> * Run a script to configure any missing role-based access control (RBAC) or required network configurations across your data sources in Azure.
> * Review the output report.

## Prerequisites
-* Azure Subscriptions where your data sources are located. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+* Azure subscriptions where your data sources are located. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
* An [Azure Purview account](create-catalog-portal.md).
-* An Azure Key Vault resource in each subscription if any data sources such as Azure SQL Database, Azure Synapse or Azure SQL Manged Instances.
+* An Azure Key Vault resource in each subscription that has data sources like Azure SQL Database, Azure Synapse Analytics, or Azure SQL Managed Instance.
* The [Azure Purview MSI Configuration](https://github.com/Azure/Purview-Samples/tree/master/Data-Source-MSI-Configuration) script.

> [!NOTE]
-> The Azure Purview MSI Configuration script is only available for **Windows**.
-> This script currently is supported for **Azure Purview Managed Identity (MSI)**.
+> The Azure Purview MSI Configuration script is available only for Windows.
+> This script is currently supported for Azure Purview Managed Identity (MSI).
> [!IMPORTANT]
-> It is highly recommended to test and verify all the changes the script performs in your Azure environment before deploying it into your production environment.
+> We strongly recommend that you test and verify all the changes the script performs in your Azure environment before you deploy it into your production environment.
-## Prepare data sources' Azure subscriptions list
+## Prepare Azure subscriptions list for data sources
-Before running the script, create a csv file (e.g. "C:\temp\Subscriptions.csv) with 4 columns:
+Before you run the script, create a .csv file (for example, "C:\temp\Subscriptions.csv") with four columns:
-1. Column name: `SubscriptionId`
- This column must contain all your Azure subscription IDs where your data sources reside.
-
- for example each column should have one subscription ID: 12345678-aaaa-bbbb-cccc-1234567890ab
-
-2. Column name: `KeyVaultName`
- Provide existing key vault name resource that is deployed in the same corresponding data source subscription.
-
- example: ContosoDevKeyVault
-
-3. Column name: `SecretNameSQLUserName`
- Provide the name of an existing Azure key vault secret that contains an Azure AD user name that can logon to Azure Synapse, Azure SQL Servers or Azure SQL Managed Instance through Azure AD authentication.
+|Column name|Description|Example|
+|-|-|-|
+|`SubscriptionId`|Azure subscription IDs for your data sources.|12345678-aaaa-bbbb-cccc-1234567890ab|
+|`KeyVaultName`|Name of an existing key vault that's deployed in the data source subscription.|ContosoDevKeyVault|
+|`SecretNameSQLUserName`|Name of an existing Azure Key Vault secret that contains an Azure Active Directory (Azure AD) user name that can sign in to Azure Synapse, Azure SQL Database, or Azure SQL Managed Instance by using Azure AD authentication.|ContosoDevSQLAdmin|
+|`SecretNameSQLPassword`|Name of an existing Azure Key Vault secret that contains an Azure AD user password that can sign in to Azure Synapse, Azure SQL Database, or Azure SQL Managed Instance by using Azure AD authentication.|ContosoDevSQLPassword|
- example: ContosoDevSQLAdmin
-4. Column name: `SecretNameSQLPassword`
- Provide the name of an existing Azure key vault secret that contains an Azure AD user password that can logon to Azure Synapse, Azure SQL Servers or Azure SQL Managed Instance through Azure AD authentication.
-
- example: ContosoDevSQLPassword
- **Sample csv file:**
+**Sample .csv file:**
- :::image type="content" source="./media/tutorial-data-sources-readiness/subscriptions-input.png" alt-text="Subscriptions List" lightbox="./media/tutorial-data-sources-readiness/subscriptions-input.png":::
- > [!NOTE]
- > You can update the file name and path in the code, if needed.
+> [!NOTE]
+> You can update the file name and path in the code, if you need to.
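For reference, a minimal four-column file built from the example values in the table above might look like this (all values are placeholders):

```csv
SubscriptionId,KeyVaultName,SecretNameSQLUserName,SecretNameSQLPassword
12345678-aaaa-bbbb-cccc-1234567890ab,ContosoDevKeyVault,ContosoDevSQLAdmin,ContosoDevSQLPassword
```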
-<br>
-## Prepare to run the script and install required PowerShell modules
+## Run the script and install the required PowerShell modules
-Follow these steps to run the script from your Windows machine:
+Follow these steps to run the script from your Windows computer:
1. [Download Azure Purview MSI Configuration](https://github.com/Azure/Purview-Samples/tree/master/Data-Source-MSI-Configuration) script to the location of your choice.
-2. On your computer, enter **PowerShell** in the search box on the Windows taskbar. In the search list, right-click **Windows PowerShell**, and then select **Run as administrator**.
+2. On your computer, enter **PowerShell** in the search box on the Windows taskbar. In the search list, right-click **Windows PowerShell** and then select **Run as administrator**.
-3. In the PowerShell window, enter the following command, replacing `<path-to-script>` with the folder path of the extracted the script file.
+3. In the PowerShell window, enter the following command. (Replace `<path-to-script>` with the folder path of the extracted script file.)
```powershell
dir -Path <path-to-script> | Unblock-File
```
-4. Enter the following command to install the Azure cmdlets.
+4. Enter the following command to install the Azure cmdlets:
```powershell
Install-Module -Name Az -AllowClobber -Scope CurrentUser
```
-5. If you see the warning prompt, *NuGet provider is required to continue*, enter **Y**, and then press Enter.
+5. If you see the prompt *NuGet provider is required to continue*, enter **Y**, and then select **Enter**.
-6. If you see the warning prompt, *Untrusted repository*, enter **A**, and then press Enter.
+6. If you see the prompt *Untrusted repository*, enter **A**, and then select **Enter**.
-7. Repeat the previous steps to install `Az.Synpase` and `AzureAD` modules.
+7. Repeat the previous steps to install the `Az.Synapse` and `AzureAD` modules.
It might take up to a minute for PowerShell to install the required modules.
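If you prefer to install everything in one pass, the three modules named in the preceding steps can be installed together; this sketch assumes the same `CurrentUser` scope as step 4:

```powershell
# Install the modules the script depends on.
Install-Module -Name Az -AllowClobber -Scope CurrentUser
Install-Module -Name Az.Synapse -AllowClobber -Scope CurrentUser
Install-Module -Name AzureAD -AllowClobber -Scope CurrentUser
```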
-<br>
-## Collect additional data needed to run the script
+## Collect other data needed to run the script
-Before you run the PowerShell script to verify data sources subscriptions readiness, get the values of the following arguments to use in the scripts:
+Before you run the PowerShell script to verify the readiness of data source subscriptions, obtain the values of the following arguments to use in the scripts:
-- `AzureDataType`: choose any of the following options as your data source type to run the readiness for the data type across your subscriptions:
+- `AzureDataType`: Choose any of the following options as your data-source type to check the readiness for the data type across your subscriptions:
- `BlobStorage`
  - `ADLSGen2`
  - `ADLSGen1`
  - `AzureSQLDB`
  - `AzureSQLMI`
  - `Synapse`
- `All`

-- `PurviewAccount`: Your existing Azure Purview Account resource name.
+- `PurviewAccount`: Your existing Azure Purview account resource name.
-- `PurviewSub`: Subscription ID where Azure Purview Account is deployed.
+- `PurviewSub`: Subscription ID where the Azure Purview account is deployed.
## Verify your permissions

Make sure your user has the following roles and permissions:
-The following permissions (minimum) are needed run the script in your Azure environment:
+At a minimum, you need the following permissions to run the script in your Azure environment:
-Role | Scope | Why is needed? |
+Role | Scope | Why is it needed? |
|-|--|--|
-| Global Reader | Azure AD Tenant | To read Azure SQL Admin user group membership and Azure Purview MSI |
-| Global Administrator | Azure AD Tenant | To assign 'Directory Reader' role to Azure SQL Managed Instances |
-| Contributor | Subscription or Resource Group where Azure Purview Account is created | To read Azure Purview Account resource. Create Key Vault resource and a secret. |
-| Owner or User Access Administrator | Management Group or Subscription where your Azure Data Sources reside | To assign RBAC |
-| Contributor | Management Group or Subscription where your Azure Data Sources reside | To setup Network configuration |
-| SQL Admin (Azure AD Authentication) | Azure SQL Servers or Azure SQL Managed Instances | To assign db_datareader role to Azure Purview |
-| Access to your Azure Key Vault | Access to get/list Key Vault's secret for Azure SQL DB, SQL MI or Synapse authentication |
+| **Global Reader** | Azure AD tenant | To read Azure SQL Admin user group membership and Azure Purview MSI |
+| **Global Administrator** | Azure AD tenant | To assign the **Directory Reader** role to Azure SQL managed instances |
+| **Contributor** | Subscription or resource group where your Azure Purview account is created | To read the Azure Purview account resource and create a Key Vault resource and secret |
+| **Owner or User Access Administrator** | Management group or subscription where your Azure data sources are located | To assign RBAC |
+| **Contributor** | Management group or subscription where your Azure data sources are located | To set up network configuration |
+| **SQL Admin** (Azure AD Authentication) | Azure SQL Server instances or Azure SQL managed instances | To assign the **db_datareader** role to Azure Purview |
+| Get/list access to your Azure key vault | Your Azure key vault | To retrieve the Key Vault secrets used for Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse authentication |
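To confirm you hold the required roles before running the script, you can list your own assignments at the target scope; the sign-in name and subscription ID below are placeholders:

```powershell
# Sketch: show the roles assigned to a given user at a subscription scope.
Get-AzRoleAssignment -SignInName "admin@contoso.com" `
    -Scope "/subscriptions/12345678-aaaa-bbbb-cccc-1234567890ab" |
    Select-Object RoleDefinitionName, Scope
```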
-<br>
## Run the client-side readiness script
-Run the script using the following steps:
+Run the script by completing these steps:
-1. Use the following command to navigate to the script's directory. Replace `path-to-script` with the folder path of the extracted file.
+1. Use the following command to go to the script's folder. Replace `<path-to-script>` with the folder path of the extracted file.
```powershell
cd <path-to-script>
```
-2. The following command sets the execution policy for the local computer. Enter **A** for *Yes to All* when you are prompted to change the execution policy.
+2. Run the following command to set the execution policy for the local computer. Enter **A** for *Yes to All* when you're prompted to change the execution policy.
```powershell
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
```
-3. Execute the script using the following parameters. Replace the `DataType`, `PurviewName` and `SubscriptionID` placeholders.
+3. Run the script with the following parameters. Replace the `DataType`, `PurviewName`, and `SubscriptionID` placeholders.
```powershell
.\purview-msi-configuration.ps1 -AzureDataType <DataType> -PurviewAccount <PurviewName> -PurviewSub <SubscriptionID>
```
- When you run the command, a pop-up window may appear twice for you to sign in to Azure and Azure AD using your Azure Active Directory credentials.
+ When you run the command, a pop-up window might appear twice prompting you to sign in to Azure and Azure AD by using your Azure Active Directory credentials.
-It can take several minutes until the report is fully generated depending on number Azure subscriptions and resources in the environment.
+It can take several minutes to create the report, depending on the number of Azure subscriptions and resources in the environment.
-you maybe prompted to sign in to your Azure SQL Servers if the provided credentials in the Key Vault do not match. You can provide the credentials or hit enter to skip the specific server.
+You might be prompted to sign in to your Azure SQL Server instances if the credentials in the key vault don't match. You can provide the credentials or select **Enter** to skip the specific server.
-After the process has finished, review the output report to review the changes.
+After the process completes, view the output report to review the changes.
-<br>
-## Additional Information
+## More information
-### What data sources are supported in the script?
+### What data sources are supported by the script?
-Currently, the following data sources are supported in the script:
+Currently, the following data sources are supported by the script:
- Azure Blob Storage (BlobStorage)
-- Azure Data Lake Storage Gen 2 (ADLSGen2)
-- Azure Data Lake Storage Gen 1 (ADLSGen1)
+- Azure Data Lake Storage Gen2 (ADLSGen2)
+- Azure Data Lake Storage Gen1 (ADLSGen1)
- Azure SQL Database (AzureSQLDB)
- Azure SQL Managed Instance (AzureSQLMI)
- Azure Synapse (Synapse) dedicated pool
-You can choose **all** or any of these data sources as input parameter when running the script.
+You can choose all or any of these data sources as the input parameter when you run the script.
### What configurations are included in the script?
-This script can help you to automatically perform the following tasks:
+This script can help you automatically complete the following tasks:
#### Azure Blob Storage (BlobStorage)

-- RBAC: Verify and assign Azure RBAC 'Reader' role to Azure Purview MSI on selected scope.
-- RBAC: Verify and assign Azure RBAC 'Storage Blob Data Reader role' to Azure Purview MSI in each of the subscriptions below selected scope.
-- Networking: Verify and report if Private Endpoint is created for storage and enabled for Blob Storage.
-- Service Endpoint: If Private Endpoint is disabled check if Service Endpoint is ON, AND enable 'Allow trusted Microsoft services to access this storage account'.
+- RBAC. Assign the Azure RBAC **Reader** role to Azure Purview MSI on the selected scope. Verify the assignment.
+- RBAC. Assign the Azure RBAC **Storage Blob Data Reader** role to Azure Purview MSI in each of the subscriptions below the selected scope. Verify the assignments.
+- Networking. Report whether private endpoint is created for storage and enabled for Blob Storage.
+- Service endpoint. If private endpoint is off, check whether service endpoint is on, and enable **Allow trusted Microsoft services to access this storage account**.
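For reference, the trusted-services exception that the script enables on a storage account corresponds to a network rule set change; a sketch with placeholder resource names:

```powershell
# Sketch: allow trusted Microsoft services through the storage firewall.
# Resource group and account names are placeholders.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "ContosoRg" `
    -Name "contosodevstorage" -Bypass AzureServices
```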
-#### Azure Data Lake Storage Gen 2 (ADLSGen2)
+#### Azure Data Lake Storage Gen2 (ADLSGen2)
-- RBAC: Verify and assign Azure RBAC 'Reader' role to Azure Purview MSI on selected scope.
-- RBAC: Verify and assign Azure RBAC 'Storage Blob Data Reader role' to Azure Purview MSI in each of the subscriptions below selected scope.
-- Networking: Verify and report if Private Endpoint is created for storage and enabled for Blob Storage.
-- Service Endpoint: If Private Endpoint is disabled check if Service Endpoint is ON, AND enable 'Allow trusted Microsoft services to access this storage account'.
+- RBAC. Assign the Azure RBAC **Reader** role to Azure Purview MSI on the selected scope. Verify the assignment.
+- RBAC. Assign the Azure RBAC **Storage Blob Data Reader** role to Azure Purview MSI in each of the subscriptions below the selected scope. Verify the assignments.
+- Networking. Report whether private endpoint is created for storage and enabled for Blob Storage.
+- Service endpoint. If private endpoint is off, check whether service endpoint is on, and enable **Allow trusted Microsoft services to access this storage account**.
-#### Azure Data Lake Storage Gen 1 (ADLSGen1)
+#### Azure Data Lake Storage Gen1 (ADLSGen1)
-- Networking: Verify if Service Endpoint is ON, AND enabled 'Allow all Azure services to access this Data Lake Storage Gen1 account' on Data Lake Storage.
-- Permissions: Verify and assign Read/Execute access to Azure Purview MSI.
+- Networking. Verify that service endpoint is on, and enable **Allow all Azure services to access this Data Lake Storage Gen1 account** on Data Lake Storage.
+- Permissions. Assign Read/Execute access to Azure Purview MSI. Verify the access.
#### Azure SQL Database (AzureSQLDB)

-- SQL Servers:
- - Network: Verify and report if Public or Private Endpoint is enabled.
- - Firewall: If Private Endpoint is off, verify firewall rules and enable 'Allow Azure services and resources to access this server'.
- - Azure AD Admin: Enable Azure AD Authentication for Azure SQL Server.
+- SQL Server instances:
+ - Network. Report whether public endpoint or private endpoint is enabled.
+ - Firewall. If private endpoint is off, verify firewall rules and enable **Allow Azure services and resources to access this server**.
+ - Azure AD administration. Enable Azure AD authentication for Azure SQL Database.
-- SQL Databases:
- - SQL Role: Assign Azure Purview MSI with db_datareader role.
+- SQL databases:
+ - SQL role. Assign the **db_datareader** role to Azure Purview MSI.
#### Azure SQL Managed Instance (AzureSQLMI)

-- SQL Managed Instance Servers:
- - Network: Verify if Public or Private Endpoint is enabled. Reports if Public endpoint is disabled.
- - ProxyOverride: Verify if Azure SQL Managed Instance is configured as Proxy or Redirect.
- - Networking: Verify and update NSG rules to allow AzureCloud with inbound access to SQL Server over required ports; Redirect: 1433 and 11000-11999 or Proxy: 3342.
- - Azure AD Admin: Enable Azure AD Authentication for Azure SQL Managed Instance.
+- SQL Managed Instance servers:
+ - Network. Verify that public endpoint or private endpoint is on. Report if public endpoint is off.
+ - ProxyOverride. Verify that Azure SQL Managed Instance is configured as Proxy or Redirect.
+ - Networking. Update NSG rules to allow AzureCloud inbound access to SQL Server instances over required ports:
+ - Redirect: 1433 and 11000-11999
+
+ or
+ - Proxy: 3342
+
+ Verify this access.
+ - Azure AD administration. Enable Azure AD authentication for Azure SQL Managed Instance.
-- SQL Databases:
- - SQL Role: Assign Azure Purview MSI with db_datareader role.
+- SQL databases:
+ - SQL role. Assign the **db_datareader** role to Azure Purview MSI.
-#### Azure Synapse (Synapse) dedicated pools:
+#### Azure Synapse (Synapse) dedicated pool
-- RBAC: Verify and assign Azure RBAC 'Reader' role to Azure Purview MSI on selected scope.
-- RBAC: Verify and assign Azure RBAC 'Storage Blob Data Reader role' to Azure Purview MSI in each of the subscriptions below selected scope.
-- SQL Servers (Dedicated Pools):
- - Network: Verify and report if Public or Private Endpoint is enabled.
- - Firewall: If Private Endpoint is off, verify firewall rules and enable 'Allow Azure services and resources to access this server'.
- - Azure AD Admin: Enable Azure AD Authentication for Azure SQL Server.
+- RBAC. Assign the Azure RBAC **Reader** role to Azure Purview MSI on the selected scope. Verify the assignment.
+- RBAC. Assign the Azure RBAC **Storage Blob Data Reader** role to Azure Purview MSI in each of the subscriptions below the selected scope. Verify the assignments.
+- SQL Server instances (dedicated pools):
+ - Network. Report whether public endpoint or private endpoint is on.
+ - Firewall. If private endpoint is off, verify firewall rules and enable **Allow Azure services and resources to access this server**.
+ - Azure AD administration. Enable Azure AD authentication for Azure SQL Database.
-- SQL Databases:
- - SQL Role: Assign Azure Purview MSI with db_datareader role.
+- SQL databases:
+ - SQL role. Assign the **db_datareader** role to Azure Purview MSI.
## Next steps

In this tutorial, you learned how to:

> [!div class="checklist"]
>
-> * Register and scan Azure data sources in Azure Purview .
-
-Advance to the next tutorial to learn how to navigate the home page and search for an asset.
+> * Identify required access and set up required authentication and network rules for Azure Purview across Azure data sources.
-> [!div class="nextstepaction"]
-> [Register and scan multiple sources in Azure Purview](register-scan-azure-multiple-sources.md)
+Go to the next tutorial to learn how to [Register and scan multiple sources in Azure Purview](register-scan-azure-multiple-sources.md).
security Threat Modeling Tool Communication Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/threat-modeling-tool-communication-security.md
public class ValuesController : ApiController
| **SDL Phase** | Build |
| **Applicable Technologies** | Generic |
| **Attributes** | N/A |
-| **References** | [Azure Redis TLS support](../../azure-cache-for-redis/cache-faq.md) |
+| **References** | [Azure Redis TLS support](../../azure-cache-for-redis/cache-faq.yml) |
| **Steps** | Redis server does not support TLS out of the box, but Azure Cache for Redis does. If you are connecting to Azure Cache for Redis and your client supports TLS, like StackExchange.Redis, then you should use TLS. By default non-TLS port is disabled for new Azure Cache for Redis instances. Ensure that the secure defaults are not changed unless there is a dependency on TLS support for redis clients. |

Please note that Redis is designed to be accessed by trusted clients inside trusted environments. This means that usually it is not a good idea to expose the Redis instance directly to the internet or, in general, to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket.
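One quick way to check the secure defaults is to probe both Redis ports from a client machine. This sketch assumes the standard Azure Cache for Redis ports (6380 for TLS, 6379 for non-TLS) and a placeholder host name:

```powershell
# Sketch: the TLS port should accept connections; the non-TLS port
# should fail on a new cache where the default is unchanged.
Test-NetConnection -ComputerName "contoso.redis.cache.windows.net" -Port 6380
Test-NetConnection -ComputerName "contoso.redis.cache.windows.net" -Port 6379
```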
sentinel Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/customer-managed-keys.md
# Set up Azure Sentinel customer-managed key
-This article provides background information and steps to configure a customer-managed key (CMK) for Azure Sentinel. CMK enables all data saved or sent to Azure Sentinel to be encrypted in all relevant storage resources with an Azure Key Vault key created or owned by you.
+This article provides background information and steps to configure a [customer-managed key (CMK)](../azure-monitor/logs/customer-managed-keys.md) for Azure Sentinel. CMK allows you to give all data stored in Azure Sentinel - already encrypted by Microsoft in all relevant storage resources - an extra layer of protection with an encryption key created and owned by you and stored in your [Azure Key Vault](../key-vault/general/overview.md).
-> [!NOTE]
-> - The Azure Sentinel CMK capability is provided only to **new customers**.
->
-> - Access to this capability is controlled by Azure feature registration. You can request access by contacting azuresentinelCMK@microsoft.com. Pending requests will be approved according to the available capacity.
->
-> - The CMK capability is only available to customers sending 1TB per day or more. You will receive information about additional pricing when you apply to Microsoft to provision CMK on your Azure subscription. Learn more about [Log Analytics pricing](../azure-monitor/logs/manage-cost-storage.md#log-analytics-dedicated-clusters).
+## Prerequisites
+
+- The CMK capability requires a Log Analytics dedicated cluster with at least a 1 TB/day commitment tier. Several workspaces can be linked to the same dedicated cluster, and they will share the same customer-managed key.
+- After you complete the steps in this guide, and before you use the workspace, contact the [Azure Sentinel Product Group](mailto:azuresentinelCMK@microsoft.com) to confirm your onboarding.
+- Learn about [Log Analytics Dedicated Cluster Pricing](../azure-monitor/logs/logs-dedicated-clusters.md#cluster-pricing-model).
+
+## Considerations
+
+- Onboarding a CMK workspace to Sentinel is supported only via REST API, and not via the Azure portal. Azure Resource Manager templates (ARM templates) currently aren't supported for CMK onboarding.
+
+- The Azure Sentinel CMK capability is provided only to *workspaces in Log Analytics dedicated clusters* that have *not already been onboarded to Azure Sentinel*.
+
+- The following CMK-related changes *are not supported* because they will be ineffective (Azure Sentinel data will continue to be encrypted only by the Microsoft-managed key, and not by the CMK):
+
+ - Enabling CMK on a workspace that's *already onboarded* to Azure Sentinel.
+ - Enabling CMK on a cluster that contains Sentinel-onboarded workspaces.
+ - Linking a Sentinel-onboarded non-CMK workspace to a CMK-enabled cluster.
+
+- The following CMK-related changes *are not supported* because they may lead to undefined and problematic behavior:
+
+ - Disabling CMK on a workspace already onboarded to Azure Sentinel.
+ - Setting a Sentinel-onboarded, CMK-enabled workspace as a non-CMK workspace by de-linking it from its CMK-enabled dedicated cluster.
+ - Disabling CMK on a CMK-enabled Log Analytics dedicated cluster.
+
+- Azure Sentinel supports System Assigned Identities in CMK configuration. Therefore, the dedicated Log Analytics cluster's identity should be of **System Assigned** type. We recommend that you use the identity that's automatically assigned to the Log Analytics cluster when it's created.
+
+- Changing the customer-managed key to another key (with another URI) currently *isn't supported*. You should change the key by [rotating it](../azure-monitor/logs/customer-managed-keys.md#key-rotation).
+
+- Before you make any CMK changes to a production workspace or to a Log Analytics cluster, contact the [Azure Sentinel Product Group](mailto:azuresentinelCMK@microsoft.com).
## How CMK works
-The Azure Sentinel solution uses several storage resources for log collection and features, including Log Analytics and others. As part of the Azure Sentinel CMK configuration, you will have to configure the CMK settings on the related storage resources as well. Data saved in storage resources other than Log Analytics will also be encrypted.
+The Azure Sentinel solution uses several storage resources for log collection and features, including a Log Analytics dedicated cluster. As part of the Azure Sentinel CMK configuration, you will have to configure the CMK settings on the related Log Analytics dedicated cluster. Data saved by Azure Sentinel in storage resources other than Log Analytics will also be encrypted using the customer-managed key configured for the dedicated Log Analytics cluster.
-Learn more about [CMK](../azure-monitor/logs/customer-managed-keys.md#customer-managed-key-overview).
+See the following additional relevant documentation:
+- [Azure Monitor customer-managed keys (CMK)](../azure-monitor/logs/customer-managed-keys.md).
+- [Azure Key Vault](../key-vault/general/overview.md).
+- [Log Analytics dedicated clusters](../azure-monitor/logs/logs-dedicated-clusters.md).
> [!NOTE] > If you enable CMK on Azure Sentinel, any Public Preview feature that does not support CMK will not be enabled.
Learn more about [CMK](../azure-monitor/logs/customer-managed-keys.md#customer-m
To provision CMK, follow these steps: 
-1. Create an Azure Key Vault and storing key.
+1. Create an Azure Key Vault and generate or import a key.
2. Enable CMK on your Log Analytics workspace.
-3. Register for Cosmos DB.
+3. Register the Azure Cosmos DB resource provider.
4. Add an access policy to your Azure Key Vault instance.
-5. Enable CMK in Azure Sentinel.
-
-6. Enable Azure Sentinel.
+5. Onboard the workspace to Azure Sentinel via the [Onboarding API](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx).
-### STEP 1: Create an Azure Key Vault and storing key
+### STEP 1: Create an Azure Key Vault and generate or import a key
1. [Create Azure Key Vault resource](/azure-stack/user/azure-stack-key-vault-manage-portal), then generate or import a key to be used for data encryption.
To provision CMK, follow these steps: 
Follow the instructions in [Azure Monitor customer-managed key configuration](../azure-monitor/logs/customer-managed-keys.md) in order to create a CMK workspace that will be used as the Azure Sentinel workspace in the following steps.
-### STEP 3: Register for Cosmos DB
+### STEP 3: Register the Azure Cosmos DB resource provider
-Azure Sentinel works with Cosmos DB as an additional storage resource. Make sure to register to Cosmos DB.
+Azure Sentinel works with Cosmos DB as an additional storage resource. Make sure to register the Azure Cosmos DB resource provider.
-Follow the Cosmos DB instruction to [Register the Azure Cosmos DB](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) resource provider for your Azure subscription.
+Follow the Cosmos DB instructions to [register the Azure Cosmos DB resource provider](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) for your Azure subscription.
### STEP 4: Add an access policy to your Azure Key Vault instance Make sure to add access from Cosmos DB to your Azure Key Vault instance. Follow the Cosmos DB instructions to [add an access policy to your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#add-access-policy) with the Azure Cosmos DB principal.
-### STEP 5: Enable CMK in Azure Sentinel
-
-The Azure Sentinel CMK capability is provided to new customers only after receiving access directly from the Azure product group. Use your contacts at Microsoft to receive approval from the Azure Sentinel team to enable CMK in your solution.
+### STEP 5: Onboard the workspace to Azure Sentinel via the onboarding API
-After you get approval, you will be asked to provide the following information to enable the CMK feature.
--- Workspace ID on which you want to enable CMK--- Key Vault URL: Copy the key's "Key Identifier" up to the last forward slash:
-
-
- ![key identifier](./media/customer-managed-keys/key-identifier.png)
-
- The Azure Sentinel team will enable the Azure Sentinel CMK feature for your
- provided workspace.
--- Verification from the Azure Sentinel product team that you were approved to use this feature. You must have this before proceeding.-
-### STEP 6: Enable Azure Sentinel
--
-Go to the Azure portal and enable Azure Sentinel on the workspace on which you set up CMK. For more information, see [Azure Sentinel Onboarding](quickstart-onboard.md).
+Onboard the workspace to Azure Sentinel via the [Onboarding API](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx).
## Key Encryption Key revocation or deletion -
-In the event that a user revokes the key encryption key, either by deleting it or removing access for Azure Sentinel, within one hour, Azure Sentinel will
-honor the change and behave as if the data is no longer available. At this point, any operation performed that uses persistent storage resources such as
+In the event that a user revokes the key encryption key, either by deleting it or by removing access for the dedicated cluster and the Azure Cosmos DB resource provider, Azure Sentinel will
+honor the change within one hour and behave as if the data is no longer available. At this point, any operation that uses persistent storage resources such as
data ingestion, persistent configuration changes, and incident creation, will be prevented. Previously stored data will not be deleted but will remain inaccessible. Inaccessible data is governed by the data-retention policy and will be purged in accordance with that policy.
The only operation possible after the encryption key is revoked or deleted is ac
If access is restored after revocation, Azure Sentinel will restore access to the data within an hour.
-To understand more about how this works in Azure Monitor, see [Azure Monitor CMK revocation](../azure-monitor/logs/customer-managed-keys.md#key-revocation).
+Access to the data can be revoked by disabling the customer-managed key in the key vault, or deleting the access policy to the key, for both the dedicated Log Analytics cluster and Cosmos DB. Revoking access by removing the key from the dedicated Log Analytics cluster, or by removing the identity associated with the dedicated Log Analytics cluster is not supported.
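The revocation timeline described above can be sketched as a small state function. The one-hour propagation window comes from the article; treating the data-retention period as a fixed number of days measured from revocation is a simplifying assumption for illustration only.

```python
from datetime import datetime, timedelta

# Sketch of data accessibility after the key encryption key is revoked:
# Azure Sentinel honors the revocation within one hour, and inaccessible
# data is eventually purged by the workspace's data-retention policy.
def data_state(revoked_at: datetime, now: datetime, retention_days: int) -> str:
    if now < revoked_at + timedelta(hours=1):
        return "available"      # revocation may not have propagated yet
    if now < revoked_at + timedelta(days=retention_days):
        return "inaccessible"   # stored but unreadable; ingestion blocked
    return "purged"             # removed per the data-retention policy

t0 = datetime(2021, 6, 1, 12, 0)
print(data_state(t0, t0 + timedelta(minutes=30), 90))  # available
print(data_state(t0, t0 + timedelta(hours=2), 90))     # inaccessible
```

Restoring access before the retention period elapses returns the data to the "available" state within an hour, per the paragraph above.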
-## Key encryption key rotation
+To understand more about how this works in Azure Monitor, see [Azure Monitor CMK revocation](../azure-monitor/logs/customer-managed-keys.md#key-revocation).
+## Customer-managed key rotation
Azure Sentinel and Log Analytics support key rotation. When a user performs key rotation in Key Vault, Azure Sentinel supports the new key within an hour.
In Key Vault, you can perform key rotation by creating a new version of the key:
You can disable the previous version of the key after 24 hours, or after the Azure Key Vault audit logs no longer show any activity that uses the previous version.
-If you use the same key in Azure Sentinel and in Log Analytics, it is necessary to perform key rotation you must explicitly update the cluster resource in Log
+After rotating a key, you must explicitly update the dedicated Log Analytics cluster resource in Log
Analytics with the new Azure Key Vault key version. For more information, see [Azure Monitor CMK rotation](../azure-monitor/logs/customer-managed-keys.md#key-rotation).
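The rotation guidance above (disable the previous key version after 24 hours, or once the Key Vault audit logs no longer show activity that uses it) can be sketched as a simple check. The function and its inputs are hypothetical, for illustration only.

```python
from datetime import datetime, timedelta
from typing import Optional

# Sketch: decide whether the previous key version can be disabled after a
# rotation. Per the guidance, wait 24 hours, or confirm that the audit
# logs show no activity using the previous version since the rotation.
def can_disable_previous_version(rotated_at: datetime,
                                 last_use_of_previous: Optional[datetime],
                                 now: datetime) -> bool:
    if now >= rotated_at + timedelta(hours=24):
        return True  # the 24-hour window has elapsed
    # Otherwise, allow it only if the old version shows no use since rotation.
    return last_use_of_previous is None or last_use_of_previous < rotated_at
```

In practice the `last_use_of_previous` timestamp would come from the Azure Key Vault audit logs mentioned in the text.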
+## Replacing a customer-managed key
+
+Azure Sentinel and Log Analytics support replacing a customer-managed key. To replace the key, create another key, either in the same key vault or in another key vault, and configure it according to the key creation instructions above. Then, update the dedicated Log Analytics cluster with the new key. Sentinel will detect the key change and will use it across all of Azure Sentinel's data storage resources within one hour.
+ ## Next steps In this document, you learned how to set up a customer-managed key in Azure Sentinel. To learn more about Azure Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md). - Get started [detecting threats with Azure Sentinel](./tutorial-detect-threats-built-in.md).-- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/notebooks.md
After you've created an AML workspace, start launching your notebooks in your Az
:::image type="content" source="media/notebooks/sentinel-notebooks-restart-kernel.png" alt-text="Restart a notebook kernel.":::
-> [!NOTE]
-> If you run into issues with your notebooks, see the [Azure Machine Learning notebook troubleshooting](/azure/machine-learning/how-to-run-jupyter-notebooks#troubleshooting).
->
+## Troubleshooting
+
+If you run into issues with your notebooks, see the [Azure Machine Learning notebook troubleshooting](/azure/machine-learning/how-to-run-jupyter-notebooks#troubleshooting).
## Next steps
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Previously updated : 06/15/2021 Last updated : 06/30/2021 # What's new in Azure Sentinel
If you're looking for items older than six months, you'll find them in the [Arch
- [Alert enrichment: alert details (Public preview)](#alert-enrichment-alert-details-public-preview) - [Upgrades for normalization and the Azure Sentinel Information Model](#upgrades-for-normalization-and-the-azure-sentinel-information-model) - [More help for playbooks!](#more-help-for-playbooks)
+- [New documentation reorganization](#new-documentation-reorganization)
### Updated service-to-service connectors
Two new documents can help you get started or get more comfortable with creating
Playbook documentation also explicitly addresses the multi-tenant MSSP scenario.
+### New documentation reorganization
+
+This month we reorganized our [Azure Sentinel documentation](index.yml), restructuring it into intuitive categories that follow common customer journeys. Use the filtered docs search and updated landing page to navigate through Azure Sentinel docs.
+++ ## May 2021 - [Azure Sentinel PowerShell module](#azure-sentinel-powershell-module)
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
In this section, you'll create a .NET Core console application that receives mes
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu. 1. In the **Package Manager Console** window, confirm that **QueueReceiver** is selected for the **Default project**. If not, use the drop-down list to select **QueueReceiver**.+
+ :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console":::
1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package: ```cmd
service-fabric Service Fabric Controlled Chaos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-controlled-chaos.md
using System.Fabric;
using System.Diagnostics; using System.Fabric.Chaos.DataStructures;
+using System.Threading.Tasks;
-class Program
+static class Program
{ private class ChaosEventComparer : IEqualityComparer<ChaosEvent> {
class Program
} }
- static void Main(string[] args)
+ static async Task Main(string[] args)
{ var clusterConnectionString = "localhost:19000"; using (var client = new FabricClient(clusterConnectionString))
class Program
try {
- client.TestManager.StartChaosAsync(parameters).GetAwaiter().GetResult();
+ await client.TestManager.StartChaosAsync(parameters);
} catch (FabricChaosAlreadyRunningException) {
class Program
try { report = string.IsNullOrEmpty(continuationToken)
- ? client.TestManager.GetChaosReportAsync(filter).GetAwaiter().GetResult()
- : client.TestManager.GetChaosReportAsync(continuationToken).GetAwaiter().GetResult();
+ ? await client.TestManager.GetChaosReportAsync(filter)
+ : await client.TestManager.GetChaosReportAsync(continuationToken);
} catch (Exception e) {
class Program
throw; }
- Task.Delay(TimeSpan.FromSeconds(1.0)).GetAwaiter().GetResult();
+ await Task.Delay(TimeSpan.FromSeconds(1.0));
continue; }
class Program
break; }
- Task.Delay(TimeSpan.FromSeconds(1.0)).GetAwaiter().GetResult();
+ await Task.Delay(TimeSpan.FromSeconds(1.0));
} } }
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
18.04 LTS | [9.39](https://support.microsoft.com/help/4597409/) | 4.15.0-20-generic to 4.15.0-123-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-63-generic </br> 5.3.0-19-generic to 5.3.0-69-generic </br> 5.4.0-37-generic to 5.4.0-53-generic</br> 4.15.0-1009-azure to 4.15.0-1099-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1031-azure </br> 4.15.0-124-generic, 5.4.0-54-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 4.15.0-1100-azure, 4.15.0-126-generic, 4.15.0-128-generic, 5.4.0-58-generic, 4.15.0-1102-azure, 5.4.0-1034-azure through 9.39 hot fix patch**| 18.04 LTS | [9.38](https://support.microsoft.com/help/4590304/) | 4.15.0-20-generic to 4.15.0-118-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-61-generic </br> 5.3.0-19-generic to 5.3.0-67-generic </br> 5.4.0-37-generic to 5.4.0-48-generic</br> 4.15.0-1009-azure to 4.15.0-1096-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1026-azure </br> 4.15.0-121-generic, 4.15.0-122-generic, 5.0.0-62-generic, 5.3.0-68-generic, 5.4.0-51-generic, 5.4.0-52-generic, 4.15.0-1099-azure, 5.4.0-1031-azure through 9.38 hot fix patch**| |||
-20.04 LTS |[9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8)| 5.4.0-26-generic to 5.4.0-60-generic </br> -generic 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.8.0-29-generic to 5.8.0-48-generic|
+20.04 LTS |[9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8)| 5.4.0-26-generic to 5.4.0-60-generic </br> 5.4.0-1010-azure to 5.4.0-1043-azure </br> 5.8.0-29-generic to 5.8.0-48-generic|
20.04 LTS |[9.41](https://support.microsoft.com/en-us/topic/update-rollup-54-for-azure-site-recovery-50873c7c-272c-4a7a-b9bb-8cd59c230533)| 5.4.0-26-generic to 5.4.0-65-generic </br> 5.4.0-1010-azure to 5.4.0-1039-azure </br> 5.8.0-29-generic to 5.8.0-43-generic </br> 5.4.0-66-generic, 5.4.0-67-generic, 5.4.0-70-generic, 5.8.0-44-generic, 5.8.0-45-generic, 5.8.0-48-generic, 5.4.0-1040-azure, 5.4.0-1041-azure, 5.4.0-1043-azure through 9.41 hot fix patch**|
-20.04 LTS |[9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a)| 5.4.0-26-generic to 5.4.0-59 </br> -generic 5.4.0-1010-azure to 5.4.0-1035-azure </br> 5.8.0-29-generic to 5.8.0-34-generic </br> 5.4.0-1036-azure, 5.4.0-60-generic, 5.4.0-62-generic, 5.8.0-36-generic, 5.8.0-38-generic, 5.4.0-1039-azure, 5.4.0-64-generic, 5.4.0-65-generic, 5.8.0-40-generic, 5.8.0-41-generic through 9.40 hot fix patch**|
-20.04 LTS |[9.39](https://support.microsoft.com/help/4597409/) | 5.4.0-26-generic to 5.4.0-53 </br> -generic 5.4.0-1010-azure to 5.4.0-1031-azure </br> 5.4.0-54-generic, 5.8.0-29-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 5.8.0-31-generic, 5.8.0-33-generic, 5.4.0-58-generic, 5.4.0-1034-azure through 9.39 hot fix patch**
-20.04 LTS |[9.39](https://support.microsoft.com/help/4597409/) | 5.4.0-26-generic to 5.4.0-53 </br> -generic 5.4.0-1010-azure to 5.4.0-1031-azure </br> 5.4.0-54-generic, 5.8.0-29-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 5.8.0-31-generic, 5.8.0-33-generic, 5.4.0-58-generic, 5.4.0-1034-azure through 9.39 hot fix patch**
-20.04 LTS |[9.38](https://support.microsoft.com/help/4590304/) | 5.4.0-26-generic to 5.4.0-48 </br> -generic 5.4.0-1010-azure to 5.4.0-1026-azure </br> 5.4.0-51-generic, 5.4.0-52-generic, 5.8.0-23-generic, 5.8.0-25-generic, 5.4.0-1031-azure through 9.38 hot fix patch**
+20.04 LTS |[9.40](https://support.microsoft.com/en-us/topic/update-rollup-53-for-azure-site-recovery-060268ef-5835-bb49-7cbc-e8c1e6c6e12a)| 5.4.0-26-generic to 5.4.0-59-generic </br> 5.4.0-1010-azure to 5.4.0-1035-azure </br> 5.8.0-29-generic to 5.8.0-34-generic </br> 5.4.0-1036-azure, 5.4.0-60-generic, 5.4.0-62-generic, 5.8.0-36-generic, 5.8.0-38-generic, 5.4.0-1039-azure, 5.4.0-64-generic, 5.4.0-65-generic, 5.8.0-40-generic, 5.8.0-41-generic through 9.40 hot fix patch**|
+20.04 LTS |[9.39](https://support.microsoft.com/help/4597409/) | 5.4.0-26-generic to 5.4.0-53-generic </br> 5.4.0-1010-azure to 5.4.0-1031-azure </br> 5.4.0-54-generic, 5.8.0-29-generic, 5.4.0-1032-azure, 5.4.0-56-generic, 5.8.0-31-generic, 5.8.0-33-generic, 5.4.0-58-generic, 5.4.0-1034-azure through 9.39 hot fix patch**
+20.04 LTS |[9.38](https://support.microsoft.com/help/4590304/) | 5.4.0-26-generic to 5.4.0-48-generic </br> 5.4.0-1010-azure to 5.4.0-1026-azure </br> 5.4.0-51-generic, 5.4.0-52-generic, 5.8.0-23-generic, 5.8.0-25-generic, 5.4.0-1031-azure through 9.38 hot fix patch**
**Note: To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch) follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario.
storage Data Lake Storage Supported Blob Storage Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-blob-storage-features.md
Previously updated : 06/09/2021 Last updated : 06/29/2021
The following table shows how each Blob storage feature is supported with Data L
|Container soft delete|Preview|Preview|[Soft delete for containers (preview)](soft-delete-container-overview.md)| |Azure Storage inventory|Preview|Preview|[Use Azure Storage inventory to manage blob data (preview)](blob-inventory.md)| |Custom domains|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|[Map a custom domain to an Azure Blob storage endpoint](storage-custom-domain-name.md)|
-|Blob soft delete|Not yet supported|Not yet supported|[Soft delete for blobs](./soft-delete-blob-overview.md)|
+|Blob soft delete|Preview|Preview|[Soft delete for blobs](./soft-delete-blob-overview.md)|
|Blobfuse|Generally available|Generally available|[How to mount Blob storage as a file system with blobfuse](storage-how-to-mount-container-linux.md)| |Anonymous public access |Generally available|Generally available| See [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).| |Customer-managed account failover|Not yet supported|Not yet supported|[Disaster recovery and account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json)|
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-enable.md
Previously updated : 03/27/2021 Last updated : 06/29/2021
Blob soft delete protects an individual blob and its versions, snapshots, and me
Blob soft delete is part of a comprehensive data protection strategy for blob data. To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
-## Enable blob soft delete
+> [!NOTE]
+> Blob soft delete can also protect blobs and directories in accounts that have the hierarchical namespace feature enabled. Blob soft delete for accounts that have the hierarchical namespace feature enabled is currently in public preview, and is available globally in all Azure regions.
Blob soft delete is disabled by default for a new storage account. You can enable or disable soft delete for a storage account at any time by using the Azure portal, PowerShell, or Azure CLI.
-# [Portal](#tab/azure-portal)
+## Enable blob soft delete
+
+### [Portal](#tab/azure-portal)
To enable blob soft delete for your storage account by using the Azure portal, follow these steps:
To enable blob soft delete for your storage account by using the Azure portal, f
:::image type="content" source="media/soft-delete-blob-enable/blob-soft-delete-configuration-portal.png" alt-text="Screenshot showing how to enable soft delete in the Azure portal":::
-# [PowerShell](#tab/azure-powershell)
+### [PowerShell](#tab/azure-powershell)
To enable blob soft delete with PowerShell, call the [Enable-AzStorageBlobDeleteRetentionPolicy](/powershell/module/az.storage/enable-azstorageblobdeleteretentionpolicy) command, specifying the retention period in days.
$properties.DeleteRetentionPolicy.Enabled
$properties.DeleteRetentionPolicy.Days ```
-# [CLI](#tab/azure-CLI)
+### [Azure CLI](#tab/azure-CLI)
To enable blob soft delete with Azure CLI, call the [az storage account blob-service-properties update](/cli/azure/storage/account/blob-service-properties#az_storage_account_blob_service_properties_update) command, specifying the retention period in days.
az storage account blob-service-properties show --account-name <storage-account>
+## Enable blob soft delete (hierarchical namespace)
+
+Blob soft delete can also protect blobs and directories in accounts that have the hierarchical namespace feature enabled on them.
+
+> [!IMPORTANT]
+> Soft delete in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW, and is available globally in all Azure regions.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+>
+> To enroll in the preview, see [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4mEEwKhLjlBjU3ziDwLH-pUOUxPTkFSSjJDRlBZNlpZSjhGUktFVzFDRi4u).
+
+<a id="enable-blob-soft-delete-hierarchical-namespace"></a>
+
+### [Portal](#tab/azure-portal)
+
+To enable blob soft delete for your storage account by using the Azure portal, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account.
+1. Locate the **Data Protection** option under **Data Management**.
+1. In the **Recovery** section, select **Enable soft delete for blobs**.
+1. Specify a retention period between 1 and 365 days. Microsoft recommends a minimum retention period of seven days.
+1. Save your changes.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing how to enable soft delete in the Azure portal in accounts that have a hierarchical namespace](./media/soft-delete-blob-enable/blob-soft-delete-configuration-portal-hierarchical-namespace.png)
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Install the latest **PowerShellGet** module. Then, close and reopen the PowerShell console.
+
+ ```powershell
+    Install-Module PowerShellGet -Repository PSGallery -Force
+ ```
+
+2. Install **Az.Storage** preview module.
+
+ ```powershell
+ Install-Module Az.Storage -Repository PsGallery -RequiredVersion 3.7.1-preview -AllowClobber -AllowPrerelease -Force
+ ```
+    For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+
+3. Obtain storage account authorization by using either a storage account key, a connection string, or Azure Active Directory (Azure AD). See [Connect to the account](data-lake-storage-directory-file-acl-powershell.md#connect-to-the-account).
+
+ The following example obtains authorization by using a storage account key.
+
+ ```powershell
+ $ctx = New-AzStorageContext -StorageAccountName '<storage-account-name>' -StorageAccountKey '<storage-account-key>'
+ ```
+
+4. To enable blob soft delete with PowerShell, use the [Enable-AzStorageDeleteRetentionPolicy](/powershell/module/az.storage/enable-azstoragedeleteretentionpolicy) command, and specify the retention period in days.
+
+ The following example enables soft delete for an account, and sets the retention period to 4 days.
+
+ ```powershell
+ Enable-AzStorageDeleteRetentionPolicy -RetentionDays 4 -Context $ctx
+ ```
+5. To check the current settings for blob soft delete, use the `Get-AzStorageServiceProperty` command:
+
+ ```powershell
+ Get-AzStorageServiceProperty -ServiceType Blob -Context $ctx
+ ```
+
+### [Azure CLI](#tab/azure-CLI)
+
+1. Open the [Azure Cloud Shell](/azure/cloud-shell/overview), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+2. Install the `storage-preview` extension.
+
+ ```azurecli
+ az extension add -n storage-preview
+ ```
+3. Connect to your storage account. See [Connect to the account](data-lake-storage-directory-file-acl-cli.md#connect-to-the-account).
+
+ > [!NOTE]
+    > The example presented in this article shows Azure Active Directory (Azure AD) authorization. To learn more about authorization methods, see [Authorize access to blob or queue data with Azure CLI](./authorize-data-operations-cli.md).
+
+4. To enable soft delete with Azure CLI, call the `az storage fs service-properties update` command, specifying the retention period in days.
+
+ The following example enables blob and directory soft delete and sets the retention period to 5 days.
+
+ ```azurecli
+    az storage fs service-properties update --delete-retention true --delete-retention-period 5 --auth-mode login
+ ```
+
+5. To disable blob soft delete, call the `az storage fs service-properties update` command with `--delete-retention false`:
+
+    ```azurecli
+    az storage fs service-properties update --delete-retention false --auth-mode login
+ ```
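The retention period used by the commands in this section must be between 1 and 365 days, and Microsoft recommends a minimum of seven days. A minimal validation sketch follows; the helper is hypothetical, not part of any SDK.

```python
# Sketch: validate a blob soft delete retention period before applying it.
# The valid range (1-365 days) and the 7-day recommendation come from the
# article; the helper itself is illustrative only.
def check_retention_days(days: int) -> str:
    if not 1 <= days <= 365:
        raise ValueError("retention must be between 1 and 365 days")
    return "ok" if days >= 7 else "ok, below the recommended 7-day minimum"

print(check_retention_days(5))  # the CLI example above sets 5 days
```

A soft-deleted blob remains recoverable for the configured number of days after deletion, after which it is permanently removed.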
+++ ## Next steps - [Soft delete for blobs](soft-delete-blob-overview.md)
storage Soft Delete Blob Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-manage.md
Previously updated : 06/07/2021 Last updated : 06/29/2021
Blob soft delete protects an individual blob and its versions, snapshots, and me
Blob soft delete is part of a comprehensive data protection strategy for blob data. To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
-## Manage soft-deleted blobs with the Azure portal
+## Manage soft-deleted blobs
+
+### Manage soft-deleted blobs with the Azure portal
You can use the Azure portal to view and restore soft-deleted blobs and snapshots.
-### View deleted blobs
+#### View deleted blobs
When blobs are soft-deleted, they are invisible in the Azure portal by default. To view soft-deleted blobs, navigate to the **Overview** page for the container and toggle the **Show deleted blobs** setting. Soft-deleted blobs are displayed with a status of **Deleted**.
Next, select the deleted blob from the list of blobs to display its properties.
:::image type="content" source="media/soft-delete-blob-manage/soft-deleted-blob-properties-portal.png" alt-text="Screenshot showing properties of soft-deleted blob in Azure portal":::
-### View deleted snapshots
+#### View deleted snapshots
Deleting a blob also deletes any snapshots associated with the blob. If a soft-deleted blob has snapshots, the deleted snapshots can also be displayed in the portal. Display the soft-deleted blob's properties, then navigate to the **Snapshots** tab, and toggle **Show deleted snapshots**. :::image type="content" source="media/soft-delete-blob-manage/soft-deleted-blob-snapshots-portal.png" alt-text="Screenshot showing ":::
-### Restore soft-deleted objects when versioning is disabled
+#### Restore soft-deleted objects when versioning is disabled
To restore a soft-deleted blob in the Azure portal when blob versioning is not enabled, first display the blob's properties, then select the **Undelete** button on the **Overview** tab. Restoring a blob also restores any snapshots that were deleted during the soft-delete retention period.
To promote a soft-deleted snapshot to the base blob, first make sure that the bl
:::image type="content" source="media/soft-delete-blob-manage/promote-snapshot.png" alt-text="Screenshot showing how to promote a snapshot to the base blob":::
-### Restore soft-deleted blobs when versioning is enabled
+#### Restore soft-deleted blobs when versioning is enabled
To restore a soft-deleted blob in the Azure portal when versioning is enabled, select the soft-deleted blob to display its properties, then select the **Versions** tab. Select the version that you want to promote to be the current version, then select **Make current version**.
To restore deleted versions or snapshots when versioning is enabled, display the
> [!NOTE] > When versioning is enabled, selecting the **Undelete** button on a deleted blob restores any soft-deleted versions or snapshots, but does not restore the base blob. To restore the base blob, you must promote a previous version.
-## Manage soft-deleted blobs with code
+### Manage soft-deleted blobs with code
You can use the Azure Storage client libraries to restore a soft-deleted blob or snapshot. The following examples show how to use the .NET client library.
-### Restore soft-deleted objects when versioning is disabled
+#### Restore soft-deleted objects when versioning is disabled
-# [.NET v12 SDK](#tab/dotnet)
+##### [.NET v12 SDK](#tab/dotnet)
To restore deleted blobs when versioning is not enabled, call the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation on those blobs. The **Undelete Blob** operation restores soft-deleted blobs and any deleted snapshots associated with those blobs.
To restore a specific soft-deleted snapshot, first call the **Undelete Blob** op
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/DataProtection.cs" id="Snippet_RecoverSpecificBlobSnapshot":::
-# [.NET v11 SDK](#tab/dotnet11)
+##### [.NET v11 SDK](#tab/dotnet11)
To restore deleted blobs when versioning is not enabled, call the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation on those blobs. The **Undelete Blob** operation restores soft-deleted blobs and any deleted snapshots associated with those blobs.
blockBlob.StartCopy(copySource);
-### Restore soft-deleted blobs when versioning is enabled
+#### Restore soft-deleted blobs when versioning is enabled
To restore a soft-deleted blob when versioning is enabled, copy a previous version over the base blob with a [Copy Blob](/rest/api/storageservices/copy-blob) or [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) operation.
-# [.NET v12 SDK](#tab/dotnet)
+##### [.NET v12 SDK](#tab/dotnet)
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/DataProtection.cs" id="Snippet_RestorePreviousVersion":::
-# [.NET v11 SDK](#tab/dotnet11)
+##### [.NET v11 SDK](#tab/dotnet11)
Not applicable. Blob versioning is supported only in the Azure Storage client libraries version 12.x and higher.
+## Manage soft-deleted blobs and directories (hierarchical namespace)
+
+You can restore soft deleted blobs and directories in accounts that have a hierarchical namespace.
+
+> [!IMPORTANT]
+> Soft delete in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW, and is available globally in all Azure regions.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+>
+> To enroll in the preview, see [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4mEEwKhLjlBjU3ziDwLH-pUOUxPTkFSSjJDRlBZNlpZSjhGUktFVzFDRi4u).
+
+### Manage soft-deleted blobs with the Azure portal
+
+You can use the Azure portal to view and restore soft-deleted blobs and directories.
+
+#### View deleted blobs and directories
+
+When blobs or directories are soft-deleted, they are invisible in the Azure portal by default. To view soft-deleted blobs and directories, navigate to the **Overview** page for the container and toggle the **Show deleted blobs** setting. Soft-deleted blobs and directories are displayed with a status of **Deleted**. The following image shows a soft-deleted directory.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing how to list soft-deleted blobs in Azure portal (hierarchical namespace enabled accounts)](media/soft-delete-blob-manage/soft-deleted-blobs-list-portal-hns.png)
+
+Next, select the deleted directory or blob from the list to display its properties. Under the **Overview** tab, notice that the status is set to **Deleted**. The portal also displays the number of days until the blob is permanently deleted.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing properties of soft-deleted blob in Azure portal (hierarchical namespace enabled accounts)](media/soft-delete-blob-manage/soft-deleted-blob-properties-portal-hns.png)
+
+#### Restore soft-deleted blobs and directories
+
+To restore a soft-deleted blob or directory in the Azure portal, first display the blob or directory's properties, then select the **Undelete** button on the **Overview** tab. The following image shows the Undelete button on a soft-deleted directory.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot showing how to restore a soft-deleted blob in Azure portal (hierarchical namespace enabled accounts)](media/soft-delete-blob-manage/undelete-soft-deleted-blob-portal-hns.png)
+
+### Restore soft deleted blobs and directories by using PowerShell
+
+>[!IMPORTANT]
+> This section applies only to accounts that have a hierarchical namespace.
+
+1. Ensure that you have the **Az.Storage** preview module installed. See [Enable blob soft delete by using PowerShell](soft-delete-blob-enable.md?tabs=azure-powershell#enable-blob-soft-delete-hierarchical-namespace).
+
+2. Obtain storage account authorization by using either a storage account key, a connection string, or Azure Active Directory (Azure AD). See [Connect to the account](data-lake-storage-directory-file-acl-powershell.md#connect-to-the-account).
+
+ The following example obtains authorization by using a storage account key.
+
+ ```powershell
+ $ctx = New-AzStorageContext -StorageAccountName '<storage-account-name>' -StorageAccountKey '<storage-account-key>'
+ ```
+
+3. To restore a soft-deleted item, use the `Restore-AzDataLakeGen2DeletedItem` command.
+
+ ```powershell
+ $filesystemName = "my-file-system"
+ $dirName="my-directory"
+ $deletedItems = Get-AzDataLakeGen2DeletedItem -Context $ctx -FileSystem $filesystemName -Path $dirName
+ $deletedItems | Restore-AzDataLakeGen2DeletedItem
+ ```
++
+### Restore soft deleted blobs and directories by using Azure CLI
+
+>[!IMPORTANT]
+> This section applies only to accounts that have a hierarchical namespace.
+
+1. Make sure that you have the `storage-preview` extension installed. See [Enable blob soft delete by using Azure CLI](soft-delete-blob-enable.md?tabs=azure-CLI#enable-blob-soft-delete-hierarchical-namespace).
+
+2. Get a list of deleted items.
+
+ ```azurecli
+ $filesystemName = "my-file-system"
+ az storage fs list-deleted-path -f $filesystemName --auth-mode login
+ ```
+
+3. To restore an item, use the `az storage fs undelete-path` command.
+
+ ```azurecli
+ $dirName="my-directory"
+    az storage fs undelete-path -f $filesystemName --deleted-path-name $dirName --deletion-id "<deletionId>" --auth-mode login
+ ```
+
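When many items were soft-deleted at once, it can help to script the undelete. The following sketch builds one `az storage fs undelete-path` invocation per item from the JSON output of `az storage fs list-deleted-path`. The `name` and `deletionId` field names are assumptions about the CLI's JSON output shape, so verify them against the output of your CLI version before relying on this.

```python
import json
import shlex


def build_undelete_commands(filesystem, list_output_json):
    """Build one `az storage fs undelete-path` command per deleted item.

    Assumes each item in the JSON output exposes `name` and `deletionId`
    fields; check these names against your CLI version's actual output.
    """
    commands = []
    for item in json.loads(list_output_json):
        commands.append(
            "az storage fs undelete-path -f {} --deleted-path-name {} "
            "--deletion-id {} --auth-mode login".format(
                shlex.quote(filesystem),
                shlex.quote(item["name"]),
                shlex.quote(item["deletionId"]),
            )
        )
    return commands


# Hypothetical sample of the list-deleted-path JSON output.
sample = '[{"name": "my-directory", "deletionId": "132624600000000000"}]'
print(build_undelete_commands("my-file-system", sample)[0])
```

Each returned string can then be executed with your shell or `subprocess.run`.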
+### Restore soft deleted blobs and directories by using .NET
+
+>[!IMPORTANT]
+> This section applies only to accounts that have a hierarchical namespace.
+
+1. Open a command prompt and change directory (`cd`) into your project folder. For example:
+
+ ```console
+ cd myProject
+ ```
+
+2. Install version `12.7.0` of the [Azure.Storage.Files.DataLake](https://www.nuget.org/packages/Azure.Storage.Files.DataLake/) NuGet package by using the `dotnet add package` command.
+
+ ```console
+    dotnet add package Azure.Storage.Files.DataLake -v 12.7.0 -s https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json
+ ```
+
+3. Then, add these using statements to the top of your code file.
+
+ ```csharp
+ using Azure;
+ using Azure.Storage;
+ using Azure.Storage.Files.DataLake;
+ using Azure.Storage.Files.DataLake.Models;
+ using NUnit.Framework;
+ using System;
+ using System.Collections.Generic;
+ using System.Threading.Tasks;
+ ```
+
+4. The following code deletes a directory, and then restores a soft deleted directory.
+
+ This method assumes that you've created a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance. To learn how to create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance, see [Connect to the account](data-lake-storage-directory-file-acl-dotnet.md#connect-to-the-account).
+
+ ```csharp
+    public async Task RestoreDirectory(DataLakeServiceClient serviceClient)
+    {
+        DataLakeFileSystemClient fileSystemClient =
+            serviceClient.GetFileSystemClient("my-container");
+
+        DataLakeDirectoryClient directory =
+            fileSystemClient.GetDirectoryClient("my-directory");
+
+ // Delete the Directory
+ await directory.DeleteAsync();
+
+ // List Deleted Paths
+ List<PathHierarchyDeletedItem> deletedItems = new List<PathHierarchyDeletedItem>();
+ await foreach (PathHierarchyDeletedItem deletedItem in fileSystemClient.GetDeletedPathsAsync())
+ {
+ deletedItems.Add(deletedItem);
+ }
+
+ Assert.AreEqual(1, deletedItems.Count);
+ Assert.AreEqual("my-directory", deletedItems[0].Path.Name);
+ Assert.IsTrue(deletedItems[0].IsPath);
+
+ // Restore deleted directory.
+ Response<DataLakePathClient> restoreResponse = await fileSystemClient.RestorePathAsync(
+ deletedItems[0].Path.Name,
+ deletedItems[0].Path.DeletionId);
+
+ }
+
+ ```
+
+### Restore soft deleted blobs and directories by using Java
+
+>[!IMPORTANT]
+> This section applies only to accounts that have a hierarchical namespace.
+
+1. To get started, open the *pom.xml* file in your text editor. Add the following dependency element to the group of dependencies.
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-file-datalake</artifactId>
+ <version>12.6.0</version>
+ </dependency>
+ ```
+
+2. Then, add these import statements to your code file.
+
+    ```java
+    import com.azure.storage.file.datalake.DataLakeFileClient;
+    import com.azure.storage.file.datalake.DataLakeFileSystemClient;
+    import com.azure.storage.file.datalake.DataLakeServiceClient;
+    import com.azure.storage.file.datalake.models.PathDeletedItem;
+ ```
+
+3. The following snippet restores a soft deleted file named `my-file`.
+
+ This method assumes that you've created a **DataLakeServiceClient** instance. To learn how to create a **DataLakeServiceClient** instance, see [Connect to the account](data-lake-storage-directory-file-acl-java.md#connect-to-the-account).
+
+ ```java
+
+ public void RestoreFile(DataLakeServiceClient serviceClient){
+
+ DataLakeFileSystemClient fileSystemClient =
+ serviceClient.getFileSystemClient("my-container");
+
+ DataLakeFileClient fileClient =
+ fileSystemClient.getFileClient("my-file");
+
+ String deletionId = null;
+
+ for (PathDeletedItem item : fileSystemClient.listDeletedPaths()) {
+
+ if (item.getName().equals(fileClient.getFilePath())) {
+ deletionId = item.getDeletionId();
+ }
+ }
+
+ fileSystemClient.restorePath(fileClient.getFilePath(), deletionId);
+ }
+
+ ```
+
+### Restore soft deleted blobs and directories by using Python
+
+>[!IMPORTANT]
+> This section applies only to accounts that have a hierarchical namespace.
+
+1. Install version `12.4.0` or higher of the Azure Data Lake Storage client library for Python by using [pip](https://pypi.org/project/pip/). The following command installs the latest version of the library.
+
+ ```
+ pip install azure-storage-file-datalake
+ ```
+
+2. Add these import statements to the top of your code file.
+
+ ```python
+ import os, uuid, sys
+ from azure.storage.filedatalake import DataLakeServiceClient
+ from azure.storage.filedatalake import FileSystemClient
+ ```
+
+3. The following code deletes a directory, and then restores a soft deleted directory.
+
+ The code example below contains an object named `service_client` of type **DataLakeServiceClient**. To see examples of how to create a **DataLakeServiceClient** instance, see [Connect to the account](data-lake-storage-directory-file-acl-python.md#connect-to-the-account).
+
+ ```python
+ def restoreDirectory():
+
+ try:
+ global file_system_client
+
+ file_system_client = service_client.create_file_system(file_system="my-file-system")
+
+ directory_path = 'my-directory'
+ directory_client = file_system_client.create_directory(directory_path)
+ resp = directory_client.delete_directory()
+
+        restored_directory_client = file_system_client.undelete_path(directory_path, resp['deletion_id'])
+ props = restored_directory_client.get_directory_properties()
+
+ print(props)
+
+ except Exception as e:
+ print(e)
+
+ ```
+ ## Next steps - [Soft delete for Blob storage](soft-delete-blob-overview.md)
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-overview.md
Previously updated : 04/08/2021 Last updated : 06/29/2021
Blob soft delete protects an individual blob, snapshot, or version from accidental deletes or overwrites by maintaining the deleted data in the system for a specified period of time. During the retention period, you can restore a soft-deleted object to its state at the time it was deleted. After the retention period has expired, the object is permanently deleted.
+> [!IMPORTANT]
+> Soft delete in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW, and is available globally in all Azure regions.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+>
+> To enroll in the preview, see [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4mEEwKhLjlBjU3ziDwLH-pUOUxPTkFSSjJDRlBZNlpZSjhGUktFVzFDRi4u).
## Recommended data protection configuration
Attempting to delete a soft-deleted object does not affect its expiry time.
If you disable blob soft delete, you can continue to access and recover soft-deleted objects in your storage account until the soft delete retention period has elapsed.
-Blob versioning is available for general-purpose v2, block blob, and Blob storage accounts. Storage accounts with a hierarchical namespace enabled for use with Azure Data Lake Storage Gen2 are not currently supported.
+Blob versioning is available for general-purpose v2, block blob, and Blob storage accounts. Storage accounts with a hierarchical namespace aren't currently supported.
Version 2017-07-29 and higher of the Azure Storage REST API support blob soft delete. > [!IMPORTANT]
-> You can use blob soft delete only to restore an individual blob, snapshot, or version. To restore a container and its contents, container soft delete must also be enabled for the storage account. Microsoft recommends enabling container soft delete and blob versioning together with blob soft delete to ensure complete protection for blob data. For more information, see [Data protection overview](data-protection-overview.md).
+> You can use blob soft delete only to restore an individual blob, snapshot, directory (in a hierarchical namespace) or version. To restore a container and its contents, container soft delete must also be enabled for the storage account. Microsoft recommends enabling container soft delete and blob versioning together with blob soft delete to ensure complete protection for blob data. For more information, see [Data protection overview](data-protection-overview.md).
> > Blob soft delete does not protect against the deletion of a storage account. To protect a storage account from deletion, configure a lock on the storage account resource. For more information about locking a storage account, see [Apply an Azure Resource Manager lock to a storage account](../common/lock-account-resource.md).
If a blob has snapshots, the blob cannot be deleted unless the snapshots are als
You can also delete one or more active snapshots without deleting the base blob. In this case, the snapshot is soft-deleted.
+If a directory is deleted in an account that has the hierarchical namespace feature enabled on it, the directory and all its contents are marked as soft-deleted.
+ Soft-deleted objects are invisible unless they are explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md). ### How overwrites are handled when soft delete is enabled
+>[!IMPORTANT]
+> This section doesn't apply to accounts that have a hierarchical namespace.
+ Calling an operation such as [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), or [Copy Blob](/rest/api/storageservices/copy-blob) overwrites the data in a blob. When blob soft delete is enabled, overwriting a blob automatically creates a soft-deleted snapshot of the blob's state prior to the write operation. When the retention period expires, the soft-deleted snapshot is permanently deleted. Soft-deleted snapshots are invisible unless soft-deleted objects are explicitly displayed or listed. For more information about how to list soft-deleted objects, see [Manage and restore soft-deleted blobs](soft-delete-blob-manage.md).
For premium storage accounts, soft-deleted snapshots do not count toward the per
### Restoring soft-deleted objects
-You can restore soft-deleted blobs by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored.
+You can restore soft-deleted blobs or directories (in a hierarchical namespace) by calling the [Undelete Blob](/rest/api/storageservices/undelete-blob) operation within the retention period. The **Undelete Blob** operation restores a blob and any soft-deleted snapshots associated with it. Any snapshots that were deleted during the retention period are restored.
+
+In accounts that have a hierarchical namespace, the **Undelete Blob** operation can also be used to restore a soft-deleted directory and all its contents.
Calling **Undelete Blob** on a blob that is not soft-deleted will restore any soft-deleted snapshots that are associated with the blob. If the blob has no snapshots and is not soft-deleted, then calling **Undelete Blob** has no effect.
For more information on how to restore soft-deleted objects, see [Manage and res
## Blob soft delete and versioning
+>[!IMPORTANT]
+> Versioning is not supported for accounts that have a hierarchical namespace.
+ If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new version. The new version is not soft-deleted and is not removed when the soft-delete retention period expires. No soft-deleted snapshots are created. When you delete a blob, the current version of the blob becomes a previous version, and there is no longer a current version. No new version is created and no soft-deleted snapshots are created. Enabling soft delete and versioning together protects blob versions from deletion. When soft delete is enabled, deleting a version creates a soft-deleted version. You can use the **Undelete Blob** operation to restore soft-deleted versions during the soft delete retention period. The **Undelete Blob** operation always restores all soft-deleted versions of the blob. It is not possible to restore only a single soft-deleted version.
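The interplay of versioning and soft delete described above can be summarized in a small conceptual model. This is a sketch of the documented behavior only, not the service implementation; the class and method names are illustrative.

```python
class Blob:
    """Toy model of blob versioning + soft delete semantics."""

    def __init__(self):
        self.current = None
        self.previous_versions = []
        self.soft_deleted_versions = []

    def write(self, data):
        # Overwriting creates a new current version; the old current
        # becomes a previous version. No soft-deleted snapshot is made.
        if self.current is not None:
            self.previous_versions.append(self.current)
        self.current = data

    def delete(self):
        # Deleting demotes the current version to a previous version;
        # afterward there is no current version.
        if self.current is not None:
            self.previous_versions.append(self.current)
            self.current = None

    def delete_version(self, data):
        # Deleting a version soft-deletes it (recoverable during retention).
        self.previous_versions.remove(data)
        self.soft_deleted_versions.append(data)

    def undelete(self):
        # Undelete Blob restores all soft-deleted versions at once, but
        # does not restore a current version; promote a previous version
        # for that.
        self.previous_versions.extend(self.soft_deleted_versions)
        self.soft_deleted_versions = []


b = Blob()
b.write("v1")
b.write("v2")
b.delete()
print(b.current, b.previous_versions)
```

After the delete, the model has no current version and both writes survive as previous versions, mirroring the behavior described above.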
Microsoft recommends enabling both versioning and blob soft delete for your stor
## Blob soft delete protection by operation
-The following table describes the expected behavior for delete and write operations when blob soft delete is enabled, either with or without blob versioning:
+The following table describes the expected behavior for delete and write operations when blob soft delete is enabled, either with or without blob versioning.
+
+### Storage account (no hierarchical namespace)
| REST API operations | Soft delete enabled | Soft delete and versioning enabled | |--|--|--|
The following table describes the expected behavior for delete and write operati
| [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) | No change. Overwritten blob metadata is not recoverable. | A new version that captures the blob's state prior to the operation is automatically generated. | | [Set Blob Tier](/rest/api/storageservices/set-blob-tier) | The base blob is moved to the new tier. Any active or soft-deleted snapshots remain in the original tier. No soft-deleted snapshot is created. | The base blob is moved to the new tier. Any active or soft-deleted versions remain in the original tier. No new version is created. |
+### Storage account (hierarchical namespace)
+
+|**REST API operation**|**Soft Delete enabled**|
+|||
+|[Path - Delete](/rest/api/storageservices/datalakestoragegen2/path/delete) |A soft-deleted blob or directory is created. The soft-deleted object is permanently deleted after the retention period expires.|
+|[Delete Blob](/rest/api/storageservices/delete-blob)|A soft-deleted object is created. The soft-deleted object is permanently deleted after the retention period expires. Soft delete is not supported for blobs that have snapshots, or for snapshots themselves.|
+|[Path - Create](/rest/api/storageservices/datalakestoragegen2/path/create) that renames a blob or directory | An existing destination blob or empty directory is soft-deleted and replaced by the source. The soft-deleted object is permanently deleted after the retention period expires.|
+ ## Pricing and billing All soft deleted data is billed at the same rate as active data. You will not be charged for data that is permanently deleted after the retention period elapses.
For more information on pricing for Blob Storage, see the [Blob Storage pricing]
## Blob soft delete and virtual machine disks
-Blob soft delete is available for both premium and standard unmanaged disks, which are page blobs under the covers. Soft delete can help you recover data deleted or overwritten by the **Delete Blob**, **Put Blob**, **Put Block List**, and **Copy Blob** operations only.
+Blob soft delete is available for both premium and standard unmanaged disks, which are page blobs under the covers. Soft delete can help you recover data deleted or overwritten by the [Delete Blob](/rest/api/storageservices/delete-blob), [Put Blob](/rest/api/storageservices/put-blob), [Put Block List](/rest/api/storageservices/put-block-list), and [Copy Blob](/rest/api/storageservices/copy-blob) operations only.
-Data that is overwritten by a call to **Put Page** is not recoverable. An Azure virtual machine writes to an unmanaged disk using calls to **Put Page**, so using soft delete to undo writes to an unmanaged disk from an Azure VM is not a supported scenario.
+Data that is overwritten by a call to [Put Page](/rest/api/storageservices/put-page) is not recoverable. An Azure virtual machine writes to an unmanaged disk using calls to [Put Page](/rest/api/storageservices/put-page), so using soft delete to undo writes to an unmanaged disk from an Azure VM is not a supported scenario.
## Next steps
storage File Sync Choose Cloud Tiering Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-choose-cloud-tiering-policies.md
Azure File Sync is supported on NTFS volumes with Windows Server 2012 R2 and new
|256 TB – 512 TB| 128 KB | |512 TB – 1 PB | 256 KB | |1 PB – 2 PB | 512 KB |
-|2 TB – 4 PB | 1024 KB |
-|4 TB – 8 TB | 2048 KB (max size) |
+|2 PB – 4 PB | 1024 KB |
+|4 PB – 8 PB | 2048 KB (max size) |
|> 8 PB | not supported | It is possible that upon creation of the volume, you manually formatted the volume with a different cluster size. If your volume stems from an older version of Windows, default cluster sizes may also be different. [This article has more details on default cluster sizes.](https://support.microsoft.com/help/140365/default-cluster-size-for-ntfs-fat-and-exfat) Even if you choose a cluster size smaller than 4 KB, an 8 KB minimum still applies as the smallest file size that can be tiered, even though twice the cluster size would technically be less than 8 KB.
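The rows of the table above can be expressed as a small lookup that returns the smallest listed cluster size supporting a given volume. This sketch covers only the rows shown here (rows for volumes below 256 TB are omitted), so treat it as illustrative of the table rather than a complete reference.

```python
TIB = 1
PIB = 1024  # TiB per PiB

# Upper volume-size bound (in TiB) for each NTFS cluster size,
# taken from the table rows above; smaller-volume rows are omitted.
CLUSTER_CEILINGS = [
    (512 * TIB, 128),   # 256 TB - 512 TB -> 128 KB
    (1 * PIB, 256),     # 512 TB - 1 PB   -> 256 KB
    (2 * PIB, 512),     # 1 PB - 2 PB     -> 512 KB
    (4 * PIB, 1024),    # 2 PB - 4 PB     -> 1024 KB
    (8 * PIB, 2048),    # 4 PB - 8 PB     -> 2048 KB (max)
]


def min_cluster_size_kb(volume_size_tib):
    """Smallest cluster size (KB) from the table rows above that
    supports a volume of the given size."""
    if volume_size_tib > 8 * PIB:
        raise ValueError("volumes larger than 8 PB are not supported")
    for ceiling_tib, cluster_kb in CLUSTER_CEILINGS:
        if volume_size_tib <= ceiling_tib:
            return cluster_kb


print(min_cluster_size_kb(3 * PIB))  # falls in the 2 PB - 4 PB row
```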
stream-analytics Cicd Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/cicd-tools.md
description: This article describes how to use Azure Stream Analytics CI/CD tool
- Previously updated : 09/10/2020 Last updated : 06/29/2021 # Automate builds, tests, and deployments of an Azure Stream Analytics job using CI/CD tools
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
+
+ Title: Cognitive Services in Azure Synapse Analytics
+description: Enrich your data with artificial intelligence (AI) in Azure Synapse Analytics using pretrained models from Azure Cognitive Services.
+++++++ Last updated : 06/30/2021++++
+# Cognitive Services in Azure Synapse Analytics
+
+Using pretrained models from Azure Cognitive Services, you can enrich your data with artificial intelligence (AI) in Azure Synapse Analytics.
+
+[Azure Cognitive Services](/azure/cognitive-services/what-are-cognitive-services) are cloud-based services that add cognitive intelligence to your data even if you don't have AI or data science skills. There are a few ways you can use these services with your data in Synapse Analytics:
+
+- The Cognitive Services wizard in Synapse Analytics generates PySpark code in a Synapse notebook that connects to a cognitive service using data in a Spark table. Then, using pretrained machine learning models, the service does the work for you to add AI to your data.
+
+- The tutorial [Build machine learning applications using Microsoft Machine Learning for Apache Spark (Preview)](tutorial-build-applications-use-mmlspark.md) demonstrates how to call a number of cognitive services using Microsoft Machine Learning for Apache Spark (MMLSpark).
+
+- Starting from the PySpark code generated by the wizard, or the example MMLSpark code provided in the tutorial, you can write your own code to use other cognitive services with your data. See [What are Azure Cognitive Services?](/azure/cognitive-services/what-are-cognitive-services) for more information about available services.
+
+## Get started
+
+The tutorial [Pre-requisites for using Cognitive Services in Azure Synapse](tutorial-configure-cognitive-services-synapse.md) walks you through a couple of steps you need to perform before using Cognitive Services in Synapse Analytics.
+
+## Tutorials
+
+The following tutorials provide complete examples of using Cognitive Services in Synapse Analytics.
+
+- [Sentiment analysis with Cognitive Services](tutorial-cognitive-services-sentiment.md) - Using an example data set of customer comments, you build a Spark table with a column that indicates the sentiment of the comments in each row.
+
+- [Anomaly detection with Cognitive Services](tutorial-cognitive-services-anomaly.md) - Using an example data set of time series data, you build a Spark table with a column that indicates whether the data in each row is an anomaly.
+
+- [Build machine learning applications using Microsoft Machine Learning for Apache Spark (Preview)](tutorial-build-applications-use-mmlspark.md) - This tutorial demonstrates how to use MMLSpark to access several models from Cognitive Services.
+
+## Next steps
+
+- [Machine Learning capabilities in Azure Synapse Analytics](what-is-machine-learning.md)
+- [What are Cognitive Services?](/azure/cognitive-services/what-are-cognitive-services)
+- [Use a sample notebook from the Synapse Analytics gallery](quickstart-gallery-sample-notebook.md)
virtual-desktop Configure Adfs Sso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/configure-adfs-sso.md
Previously updated : 05/28/2021 Last updated : 06/30/2021 # Configure AD FS single sign-on for Azure Virtual Desktop
-> [!IMPORTANT]
-> AD FS single sign-on is currently in public preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- This article will walk you through the process of configuring Active Directory Federation Service (AD FS) single sign-on (SSO) for Azure Virtual Desktop. > [!NOTE]
This article will walk you through the process of configuring Active Directory F
## Requirements
-> [!IMPORTANT]
-> During public preview, you must configure your host pool to be in the [validation environment](create-validation-host-pool.md).
- Before configuring AD FS single sign-on, you must have the following setup running in your environment: * You must deploy the **Active Directory Certificate Services (CA)** role. All servers running the role must be domain-joined, have the latest Windows updates installed, and be configured as [enterprise certificate authorities](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731183%28v%3dws.10%29).
This script only has one required parameter, *ADFSAuthority*, which is the URL t
## Configure your Azure Virtual Desktop host pool
-> [!IMPORTANT]
-> During public preview, you must configure your host pool to be in the [validation environment](create-validation-host-pool.md).
- It's time to configure the AD FS SSO parameters on your Azure Virtual Desktop host pool. To do this, [set up your PowerShell environment](powershell-module.md) for Azure Virtual Desktop if you haven't already and connect to your account. After that, update the SSO information for your host pool by running one of the following two cmdlets in the same PowerShell window on the AD FS VM:
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/generalize.md
First you'll deprovision the VM by using the Azure VM agent to delete machine-sp
4. After the command completes, enter **exit** to close the SSH client. The VM will still be running at this point.
-The the VM needs to be marked as generalized on the platform.
+Then the VM needs to be marked as generalized on the platform.
```azurecli-interactive
az vm generalize \
```
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/setup-mpi.md
The above syntax assumes a shared home directory; otherwise, the .ssh directory must be copied to each node.
- Learn about the [InfiniBand enabled](../../sizes-hpc.md#rdma-capable-instances) [H-series](../../sizes-hpc.md) and [N-series](../../sizes-gpu.md) VMs
- Review the [HBv3-series overview](hbv3-series-overview.md) and [HC-series overview](hc-series-overview.md).
+- Read [Optimal MPI process placement for HB-series VMs](https://techcommunity.microsoft.com/t5/azure-global/optimal-mpi-process-placement-for-azure-hb-series-vms/ba-p/2450663).
- Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-network Monitor Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/monitor-virtual-network-reference.md
This section refers to all of the Azure Monitor Logs Kusto tables relevant to Az
|Resource Type | Notes |
|-|--|
-| Virtual network | [Microsoft.Network/virtualNetworks](/azure/azure-monitor/reference/tables/tables-resourcetype.md#virtual-networks) |
-| Network interface | [Microsoft.Network/networkInterface](/azure/azure-monitor/reference/tables/tables-resourcetype.md#network-interfaces) |
-| Public IP address | [Microsoft.Network/publicIP](/azure/azure-monitor/reference/tables/tables-resourcetype.md#public-ip-addresses) |
+| Virtual network | [Microsoft.Network/virtualNetworks](/azure/azure-monitor/reference/tables/tables-resourcetype#virtual-networks) |
+| Network interface | [Microsoft.Network/networkInterface](/azure/azure-monitor/reference/tables/tables-resourcetype#network-interfaces) |
+| Public IP address | [Microsoft.Network/publicIP](/azure/azure-monitor/reference/tables/tables-resourcetype#public-ip-addresses) |
### Diagnostics tables