Updates from: 07/08/2021 03:16:58
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Userinfo Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/userinfo-endpoint.md
Previously updated : 03/09/2021 Last updated : 07/07/2021
The user info UserJourney specifies:
} ```
-1. The OutputClaims element of the **UserInfoAuthorization** technical profile specifies the attributes you want to read from the access token. The **ClaimTypeReferenceId** is the reference to a claim type. The optional **PartnerClaimType** is the name of the of the claim defined in the access token.
+1. The OutputClaims element of the **UserInfoAuthorization** technical profile specifies the attributes you want to read from the access token. The **ClaimTypeReferenceId** is the reference to a claim type. The optional **PartnerClaimType** is the name of the claim defined in the access token.
A successful response would look like:
} ```
+## Provide optional claims
+
+To provide more claims to your app, follow these steps:
+
+1. [Add user attributes and customize user input](configure-user-input.md).
+1. Modify the [Relying party policy technical profile](relyingparty.md#technicalprofile) OutputClaims element with the claims you want to provide. Use the `DefaultValue` attribute to set a default value. You can also set the default value to a [claim resolver](claim-resolver-overview.md), such as `{Context:CorrelationId}`. To force the use of the default value, set the `AlwaysUseDefaultValue` attribute to `true`. The following example adds the city claim with a default value.
+
+ ```xml
+ <RelyingParty>
+ ...
+ <TechnicalProfile Id="PolicyProfile">
+ ...
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="city" DefaultValue="Berlin" />
+ </OutputClaims>
+ ...
+ </TechnicalProfile>
+ </RelyingParty>
+ ```
+
+1. Modify the UserInfoIssuer technical profile InputClaims element with the claims you want to provide. Use the `PartnerClaimType` attribute to change the name of the claim returned to your app. The following example adds the city claim and changes the names of some of the claims.
+
+ ```xml
+ <TechnicalProfile Id="UserInfoIssuer">
+ ...
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="objectId" />
+ <InputClaim ClaimTypeReferenceId="city" />
+ <InputClaim ClaimTypeReferenceId="givenName" />
+ <InputClaim ClaimTypeReferenceId="surname" PartnerClaimType="familyName" />
+ <InputClaim ClaimTypeReferenceId="displayName" PartnerClaimType="name" />
+ <InputClaim ClaimTypeReferenceId="signInNames.emailAddress" PartnerClaimType="email" />
+ </InputClaims>
+ ...
+ </TechnicalProfile>
+ ```
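
With these input claims, a response from the UserInfo endpoint might look like the following sketch (all claim values are illustrative):

```json
{
  "objectId": "11111111-1111-1111-1111-111111111111",
  "city": "Berlin",
  "givenName": "Casey",
  "familyName": "Jensen",
  "name": "Casey Jensen",
  "email": "casey.jensen@contoso.com"
}
```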
+## Next steps
+
+- You can find an example of a UserInfo endpoint policy on [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/user-info-endpoint).
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/known-issues.md
Previously updated : 05/28/2021 Last updated : 07/07/2021
Known issues to be aware of when working with app provisioning. You can provide
## Authorization
-**Unable to save after successful connection test**
-
-If you can successfully test a connection, but can't save, then you've exceeded the allowable storage limit for credentials. To learn more, see [Problem saving administrator credentials](./user-provisioning.md).
-
**Unable to save**

The tenant URL, secret token, and notification email must be filled in to save. You can't provide just one of them.
Attribute-mapping expressions can have a maximum of 10,000 characters.
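For scale, attribute-mapping expressions are built from functions such as `Join` and `Switch`; a short illustrative expression (attribute names assumed from the default schema) looks like the following, and even deeply nested expressions must stay under that limit:

```
Join(" ", [givenName], [surname])
```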
Directory extensions, appRoleAssignments, userType, and accountExpires are not supported as scoping filters.
+**Multi-value directory extensions**
+
+Multi-value directory extensions cannot be used in attribute mappings or scoping filters.
## Service issues
The following attributes and objects are not supported:
- The attributes that the target application supports are discovered and surfaced in the Azure portal in Attribute Mappings. Newly added attributes will continue to be discovered. However, if an attribute type has changed (for example, string to boolean), and the attribute is part of the mappings, the type will not change automatically in the Azure portal. Customers will need to go into advanced settings in mappings and manually update the attribute type.

## Next steps
-- [How provisioning works](how-provisioning-works.md)
+- [How provisioning works](how-provisioning-works.md)
active-directory Concept Mfa Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-plan.md
- Title: Plan an Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) deployment
-description: Learn how to plan and implement an Azure AD MFA roll-out.
- Previously updated : 07/01/2021
-# Plan an Azure Active Directory Multi-Factor Authentication deployment
-
-Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. Organizations can enable multifactor authentication with [Conditional Access](../conditional-access/overview.md) to make the solution fit their specific needs.
-
-This deployment guide shows you how to plan and implement an [Azure AD MFA](concept-mfa-howitworks.md) roll-out.
-
-## Prerequisites for deploying Azure AD MFA
-
-Before you begin your deployment, ensure you meet the following prerequisites for your relevant scenarios.
-
-| Scenario | Prerequisite |
-|-|--|
-|**Cloud-only** identity environment with modern authentication | **No prerequisite tasks** |
-|**Hybrid identity** scenarios | Deploy [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) and synchronize user identities between the on-premises Active Directory Domain Services (AD DS) and Azure AD. |
-| **On-premises legacy applications** published for cloud access| Deploy [Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) |
-
-## Choose authentication methods for MFA
-
-There are many methods that can be used for second-factor authentication. You can choose from the list of available authentication methods, evaluating each in terms of security, usability, and availability.
-
->[!IMPORTANT]
->Enable more than one MFA method so that users have a backup method available in case their primary method is unavailable.
-Methods include:
-
-- [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)
-- [Microsoft Authenticator app](concept-authentication-authenticator-app.md)
-- [FIDO2 security key (preview)](concept-authentication-passwordless.md#fido2-security-keys)
-- [OATH hardware tokens (preview)](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview)
-- [OATH software tokens](concept-authentication-oath-tokens.md#oath-software-tokens)
-- [SMS verification](concept-authentication-phone-options.md#mobile-phone-verification)
-- [Voice call verification](concept-authentication-phone-options.md)
-
-When choosing the authentication methods that will be used in your tenant, consider their security and usability:
-
-![Choose the right authentication method](media/concept-authentication-methods/authentication-methods.png)
-
-To learn more about the strength and security of these methods and how they work, see the following resources:
-- [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
-- [Video: Choose the right authentication methods to keep your organization safe](https://youtu.be/LB2yj4HSptc)
-
-You can use this [PowerShell script](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/) to analyze users' MFA configurations and suggest the appropriate MFA authentication method.
-
-For the best flexibility and usability, use the Microsoft Authenticator app. This authentication method provides the best user experience and multiple modes, such as passwordless, MFA push notifications, and OATH codes. The Microsoft Authenticator app also meets the National Institute of Standards and Technology (NIST) [Authenticator Assurance Level 2 requirements](../standards/nist-authenticator-assurance-level-2.md).
-
-You can control the authentication methods available in your tenant. For example, you may want to block some of the least secure methods, such as SMS.
-
-| Authentication method | Manage from | Scoping |
-|---|---|---|
-| Microsoft Authenticator (Push notification and passwordless phone sign-in) | MFA settings or Authentication methods policy | Authenticator passwordless phone sign-in can be scoped to users and groups |
-| FIDO2 security key | Authentication methods policy | Can be scoped to users and groups |
-| Software or Hardware OATH tokens | MFA settings | |
-| SMS verification | MFA settings | Manage SMS sign-in for primary authentication in authentication policy. SMS sign-in can be scoped to users and groups. |
-| Voice calls | Authentication methods policy | |
--
-## Plan Conditional Access policies
-
-Azure AD MFA is enforced with Conditional Access policies. These policies allow you to prompt users for multifactor authentication when needed for security and stay out of users' way when not needed.
-
-![Conceptual Conditional Access process flow](media/concept-mfa-plan/conditional-access-overview-how-it-works.png)
-
-In the Azure portal, you configure Conditional Access policies under **Azure Active Directory** > **Security** > **Conditional Access**.
-
-To learn more about creating Conditional Access policies, see [Conditional Access policy to prompt for Azure AD MFA when a user signs in to the Azure portal](tutorial-enable-azure-mfa.md). This helps you to:
-- Become familiar with the user interface
-- Get a first impression of how Conditional Access works
-
-For end-to-end guidance on Azure AD Conditional Access deployment, see the [Conditional Access deployment plan](../conditional-access/plan-conditional-access.md).
-
-### Common policies for Azure AD MFA
-
-Common use cases to require Azure AD MFA include:
-- For [administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md)
-- To [specific applications](tutorial-enable-azure-mfa.md)
-- For [all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)
-- For [Azure management](../conditional-access/howto-conditional-access-policy-azure-management.md)
-- From [network locations you don't trust](../conditional-access/untrusted-networks.md)
-
-### Named locations
-
-To manage your Conditional Access policies, the location condition of a Conditional Access policy enables you to tie access control settings to the network locations of your users. We recommend using [Named Locations](../conditional-access/location-condition.md) so that you can create logical groupings of IP address ranges or countries and regions. You can then create a policy for all apps that blocks sign-in from those named locations. Be sure to exempt your administrators from this policy.
-
-### Risk-based policies
-
-If your organization uses [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) to detect risk signals, consider using [risk-based policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) instead of named locations. Policies can be created to force password changes when there is a threat of compromised identity or require multifactor authentication when a sign-in is deemed [risky by events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) such as leaked credentials, sign-ins from anonymous IP addresses, and more.
-
-Risk policies include:
-- [Require all users to register for Azure AD MFA](../identity-protection/howto-identity-protection-configure-mfa-policy.md)
-- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
-- [Require MFA for users with medium or high sign-in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
-
-## Plan user session lifetime
-
-When planning your MFA deployment, it's important to think about how frequently you would like to prompt your users. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
-Azure AD has multiple settings that determine how often you need to reauthenticate. Understand the needs of your business and users and configure settings that provide the best balance for your environment.
-
-We recommend using devices with Primary Refresh Tokens (PRTs) for an improved end-user experience, and reducing the session lifetime with sign-in frequency policy only for specific business use cases.
-
-For more information, see [Optimize reauthentication prompts and understand session lifetime for Azure AD MFA](concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
-
-## Plan user registration
-
-A major step in every MFA deployment is getting users registered to use MFA. Authentication methods such as Voice and SMS allow pre-registration, while others like the Authenticator App require user interaction. Administrators must determine how users will register their methods.
-
-### Combined registration for SSPR and Azure AD MFA
-We recommend using the [combined registration experience](howto-registration-mfa-sspr-combined.md) for Azure AD MFA and [Azure AD self-service password reset (SSPR)](concept-sspr-howitworks.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD MFA. Combined registration is a single step for end users.
-
-### Registration with Identity Protection
-Azure AD Identity Protection contributes both a registration policy and automated risk detection and remediation policies to the Azure AD MFA story. Policies can be created to force password changes when there is a threat of compromised identity, or to require MFA when a sign-in is deemed risky.
-If you use Azure AD Identity Protection, [configure the Azure AD MFA registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) to prompt your users to register the next time they sign in interactively.
-
-### Registration without Identity Protection
-If you don't have licenses that enable Azure AD Identity Protection, users are prompted to register the next time that MFA is required at sign-in.
-To require users to use MFA, you can use Conditional Access policies and target frequently used applications like HR systems.
-If a user's password is compromised, it could be used to register for MFA, taking control of their account. We therefore recommend [securing the security registration process with Conditional Access policies](../conditional-access/howto-conditional-access-policy-registration.md) that require trusted devices and locations.
-You can further secure the process by also requiring a [Temporary Access Pass](howto-authentication-temporary-access-pass.md), a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including passwordless ones.
-
-### Increase the security of registered users
-If you have users registered for MFA using SMS or voice calls, you may want to move them to more secure methods such as the Microsoft Authenticator app. Microsoft now offers a public preview of functionality that allows you to prompt users to set up the Microsoft Authenticator app during sign-in. You can set these prompts by group, controlling who is prompted, enabling targeted campaigns to move users to the more secure method.
-
-### Plan recovery scenarios
-As mentioned before, ensure users are registered for more than one MFA method, so that if one is unavailable, they have a backup.
-If the user does not have a backup method available, you can:
-- Provide them a Temporary Access Pass so that they can manage their own authentication methods. You can also provide a Temporary Access Pass to enable temporary access to resources.
-- Update their methods as an administrator. To do so, select the user in the Azure portal, then select Authentication methods and update their methods.
-### User communications
-
-It's critical to inform users about upcoming changes, Azure AD MFA registration requirements, and any necessary user actions.
-We provide [communication templates](https://aka.ms/mfatemplates) and [end-user documentation](../user-help/security-info-setup-signin.md) to help draft your communications. Send users to [https://myprofile.microsoft.com](https://myprofile.microsoft.com/) to register by selecting the **Security Info** link on that page.
-
-## Plan integration with on-premises systems
-
-Applications that authenticate directly with Azure AD and have modern authentication (WS-Fed, SAML, OAuth, OpenID Connect) can make use of Conditional Access policies.
-Some legacy and on-premises applications do not authenticate directly against Azure AD and require additional steps to use Azure AD MFA. You can integrate them by using Azure AD Application proxy or [Network policy services](/windows-server/networking/core-network-guide/core-network-guide#BKMK_optionalfeatures).
-
-### Integrate with AD FS resources
-
-We recommend migrating applications secured with Active Directory Federation Services (AD FS) to Azure AD. However, if you are not ready to migrate these to Azure AD, you can use the Azure MFA adapter with AD FS 2016 or newer.
-If your organization is federated with Azure AD, you can [configure Azure AD MFA as an authentication provider with AD FS resources](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa) both on-premises and in the cloud.
-
-### RADIUS clients and Azure AD MFA
-
-For applications that are using RADIUS authentication, we recommend moving client applications to modern protocols such as SAML, OpenID Connect, or OAuth on Azure AD. If the application cannot be updated, you can deploy [Network Policy Server (NPS) with the Azure MFA extension](howto-mfa-nps-extension.md). The Network Policy Server (NPS) extension acts as an adapter between RADIUS-based applications and Azure AD MFA to provide a second factor of authentication.
-
-#### Common integrations
-
-Many vendors now support SAML authentication for their applications. When possible, we recommend federating these applications with Azure AD and enforcing MFA through Conditional Access. If your vendor doesn't support modern authentication, you can use the NPS extension.
-Common RADIUS client integrations include applications such as [Remote Desktop Gateways](howto-mfa-nps-extension-rdg.md) and [VPN servers](howto-mfa-nps-extension-vpn.md).
-
-Others might include:
-- Citrix Gateway
-
- [Citrix Gateway](https://docs.citrix.com/advanced-concepts/implementation-guides/citrix-gateway-microsoft-azure.html#microsoft-azure-mfa-deployment-methods) supports both RADIUS and NPS extension integration, and a SAML integration.
--- Cisco VPN
- - The Cisco VPN supports both RADIUS and [SAML authentication for SSO](../saas-apps/cisco-anyconnect.md).
- - By moving from RADIUS authentication to SAML, you can integrate the Cisco VPN without deploying the NPS extension.
-- All VPNs
-
-## Deploy Azure AD MFA
-
-Your MFA rollout plan should include a pilot deployment followed by deployment waves that are within your support capacity. Begin your rollout by applying your Conditional Access policies to a small group of pilot users. After evaluating the effect on the pilot users, process used, and registration behaviors, you can either add more groups to the policy or add more users to the existing groups.
-
-Follow the steps below:
-
-1. Meet the necessary prerequisites
-1. Configure chosen authentication methods
-1. Configure your Conditional Access policies
-1. Configure session lifetime settings
-1. Configure Azure AD MFA registration policies
-
-## Manage Azure AD MFA
-This section provides reporting and troubleshooting information for Azure AD MFA.
-
-### Reporting and Monitoring
-
-Azure AD has reports that provide technical and business insights. Use them to follow the progress of your deployment and to check whether your users are successful at sign-in with MFA. Have your business and technical application owners assume ownership of and consume these reports based on your organization's requirements.
-
-You can monitor authentication method registration and usage across your organization using the [Authentication Methods Activity dashboard](howto-authentication-methods-activity.md). This helps you understand what methods are being registered and how they're being used.
-
-#### Sign-in report to review MFA events
-
-The Azure AD sign-in reports include authentication details for events when a user is prompted for multi-factor authentication, and if any Conditional Access policies were in use. You can also use PowerShell for reporting on users registered for MFA.
-
-NPS extension and AD FS logs can be viewed from **Security** > **MFA** > **Activity report**.
-
-For more information, and additional MFA reports, see [Review Azure AD Multi-Factor Authentication events](howto-mfa-reporting.md#view-the-azure-ad-sign-ins-report).
-
-### Troubleshoot Azure AD MFA
-See [Troubleshooting Azure AD MFA](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) for common issues.
-
-## Next steps
-
-[Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
-
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-use-email-signin.md
Previously updated : 5/3/2021 Last updated : 07/07/2021
Here's what you need to know about email as an alternate login ID:
In the current preview state, the following limitations apply to email as an alternate login ID:
-* **User experience -** Users may see their UPN, even when they signed-in with their non-UPN email. The following example behavior may be seen:
+* **User experience** - Users may see their UPN, even when they signed-in with their non-UPN email. The following example behavior may be seen:
  * User is prompted to sign in with UPN when directed to Azure AD sign-in with `login_hint=<non-UPN email>`.
  * When a user signs in with a non-UPN email and enters an incorrect password, the *"Enter your password"* page changes to display the UPN.
  * On some Microsoft sites and apps, such as Microsoft Office, the *Account Manager* control typically displayed in the upper right may display the user's UPN instead of the non-UPN email used to sign in.
-* **Unsupported flows -** Some flows are currently not compatible with non-UPN emails, such as the following:
+* **Unsupported flows** - Some flows are currently not compatible with non-UPN emails, such as the following:
  * Identity Protection doesn't match non-UPN emails with *Leaked Credentials* risk detection. This risk detection uses the UPN to match credentials that have been leaked. For more information, see [Azure AD Identity Protection risk detection and remediation][identity-protection].
  * B2B invites sent to a non-UPN email are not fully supported. After accepting an invite sent to a non-UPN email, sign-in with the non-UPN email may not work for the guest user on the resource tenant endpoint.
  * When a user is signed in with a non-UPN email, they cannot change their password. Azure AD self-service password reset (SSPR) should work as expected. During SSPR, the user may see their UPN if they verify their identity via alternate email.
-* **Unsupported scenarios -** The following scenarios are not supported. Sign-in with non-UPN email for:
- * Hybrid Azure AD joined devices
- * Azure AD joined devices
- * Azure AD registered devices
- * Seamless SSO
- * Applications using Resource Owner Password Credentials (ROPC)
+* **Unsupported scenarios** - The following scenarios are not supported. Sign-in with non-UPN email for:
+ * [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md)
+ * [Azure AD joined devices](../devices/concept-azure-ad-join.md)
+ * [Azure AD registered devices](../devices/concept-azure-ad-register.md)
+ * [Seamless SSO](../hybrid/how-to-connect-sso.md)
+ * [Applications using Resource Owner Password Credentials (ROPC)](../develop/v2-oauth-ropc.md)
  * Applications using legacy authentication such as POP3 and SMTP
  * Skype for Business
  * Microsoft Office on macOS
  * Microsoft Teams on web
  * OneDrive, when the sign-in flow does not involve Multi-Factor Authentication
-* **Unsupported apps -** Some third-party applications may not work as expected if they assume that the `unique_name` or `preferred_username` claims are immutable or will always match a specific user attribute (e.g. UPN).
+* **Unsupported apps** - Some third-party applications may not work as expected if they assume that the `unique_name` or `preferred_username` claims are immutable or will always match a specific user attribute, such as UPN.
-* **Logging -** Changes made to the feature's configuration in HRD policy are not explicitly shown in the audit logs. In addition, the *Sign-in identifier type* field in the sign-in logs may not be always accurate and should not be used to determine whether the feature has been used for sign-in.
+* **Logging** - Changes made to the feature's configuration in HRD policy are not explicitly shown in the audit logs. In addition, the *Sign-in identifier type* field in the sign-in logs may not be always accurate and should not be used to determine whether the feature has been used for sign-in.
-* **Staged rollout policy -** The following limitations apply only when the feature is enabled using staged rollout policy:
+* **Staged rollout policy** - The following limitations apply only when the feature is enabled using staged rollout policy:
  * The feature does not work as expected for users that are included in other staged rollout policies.
  * Staged rollout policy supports a maximum of 10 groups per feature.
  * Staged rollout policy does not support nested groups.
  * Staged rollout policy does not support dynamic groups.
  * Contact objects inside the group will block the group from being added to a staged rollout policy.
-* **Duplicate values -** Within a tenant, a cloud-only user's UPN can be the same value as another user's proxy address synced from the on-premises directory. In this scenario, with the feature enabled, the cloud-only user will not be able to sign in with their UPN. More on this issue in the [Troubleshoot](#troubleshoot) section.
+* **Duplicate values** - Within a tenant, a cloud-only user's UPN can be the same value as another user's proxy address synced from the on-premises directory. In this scenario, with the feature enabled, the cloud-only user will not be able to sign in with their UPN. More on this issue in the [Troubleshoot](#troubleshoot) section.
## Overview of alternate login ID options

To sign in to Azure AD, users enter a value that uniquely identifies their account. Historically, you could only use the Azure AD UPN as the sign-in identifier.
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-getstarted.md
Previously updated : 05/03/2021 Last updated : 07/07/2021
-# Plan an Azure AD Multi-Factor Authentication deployment
+# Plan an Azure Active Directory Multi-Factor Authentication deployment
-People are connecting to organizational resources in increasingly complicated scenarios. People connect from organization-owned, personal, and public devices on and off the corporate network using smart phones, tablets, PCs, and laptops, often on multiple platforms. In this always-connected, multi-device and multi-platform world, the security of user accounts is more important than ever. Passwords, no matter their complexity, used across devices, networks, and platforms are no longer sufficient to ensure the security of the user account, especially when users tend to reuse passwords across accounts. Sophisticated phishing and other social engineering attacks can result in usernames and passwords being posted and sold across the dark web.
+Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) helps safeguard access to data and applications, providing another layer of security by using a second form of authentication. Organizations can enable multifactor authentication with [Conditional Access](../conditional-access/overview.md) to make the solution fit their specific needs.
-[Azure AD Multi-Factor Authentication (MFA)](concept-mfa-howitworks.md) helps safeguard access to data and applications. It provides an additional layer of security using a second form of authentication. Organizations can use [Conditional Access](../conditional-access/overview.md) to make the solution fit their specific needs.
+This deployment guide shows you how to plan and implement an [Azure AD MFA](concept-mfa-howitworks.md) roll-out.
-This deployment guide shows you how to plan and then test an Azure AD Multi-Factor Authentication roll-out.
+## Prerequisites for deploying Azure AD MFA
-To quickly see Azure AD Multi-Factor Authentication in action and then come back to understand additional deployment considerations:
-
-> [!div class="nextstepaction"]
-> [Enable Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md)
-
-## Prerequisites
-
-Before starting a deployment of Azure AD Multi-Factor Authentication, there are prerequisite items that should be considered.
+Before you begin your deployment, ensure you meet the following prerequisites for your relevant scenarios.
| Scenario | Prerequisite |
-|---|---|
-| **Cloud-only** identity environment with modern authentication | **No additional prerequisite tasks** |
-| **Hybrid** identity scenarios | [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) is deployed and user identities are synchronized or federated between the on-premises Active Directory Domain Services and Azure Active Directory. |
-| On-premises legacy applications published for cloud access | Azure AD [Application Proxy](../app-proxy/application-proxy.md) is deployed. |
-| Using Azure AD MFA with RADIUS Authentication | A [Network Policy Server (NPS)](howto-mfa-nps-extension.md) is deployed. |
-| Users have Microsoft Office 2010 or earlier, or Apple Mail for iOS 11 or earlier | Upgrade to [Microsoft Office 2013 or later](https://support.microsoft.com/help/4041439/modern-authentication-configuration-requirements-for-transition-from-o) and Apple mail for iOS 12 or later. Conditional Access is not supported by legacy authentication protocols. |
-
-## Plan user rollout
-
-Your MFA rollout plan should include a pilot deployment followed by deployment waves that are within your support capacity. Begin your rollout by applying your Conditional Access policies to a small group of pilot users. After evaluating the effect on the pilot users, process used, and registration behaviors, you can either add more groups to the policy or add more users to the existing groups.
-
-### User communications
-
-It is critical to inform users, in planned communications, about upcoming changes, Azure AD MFA registration requirements, and any necessary user actions. We recommend communications are developed in concert with representatives from within your organization, such as your Communications, Change Management, or Human Resources departments.
-
-Microsoft provides [communication templates](https://aka.ms/mfatemplates) and [end-user documentation](../user-help/security-info-setup-signin.md) to help draft your communications. You can send users to [https://myprofile.microsoft.com](https://myprofile.microsoft.com) to register directly by selecting the **Security Info** links on that page.
-
-## Deployment considerations
-
-Azure AD Multi-Factor Authentication is deployed by enforcing policies with Conditional Access. A Conditional Access policy can require users to perform multi-factor authentication when certain criteria are met, such as:
-
-* All users, a specific user, member of a group, or assigned role
-* Specific cloud application being accessed
-* Device platform
-* State of device
-* Network location or geo-located IP address
-* Client applications
-* Sign-in risk (Requires Identity Protection)
-* Compliant device
-* Hybrid Azure AD joined device
-* Approved client application
-
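As a hedged sketch of what such a policy can look like programmatically (the display name and report-only state are illustrative, and the portal steps later in this article are the documented path), the Microsoft Graph Conditional Access API can create a policy that requires MFA for all users:

```powershell
# Sketch only: create a report-only Conditional Access policy requiring MFA for
# all users on all cloud apps. Assumes $token holds a Microsoft Graph access token
# with the Policy.ReadWrite.ConditionalAccess permission.
$policy = @{
    displayName   = "Require MFA for all users (pilot)"     # name is illustrative
    state         = "enabledForReportingButNotEnforced"     # report-only while piloting
    conditions    = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Headers @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" } `
    -Body $policy
```

Starting in report-only mode lets you evaluate the impact on your pilot group before enforcing the policy.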
-
-Use the customizable posters and email templates in [multi-factor authentication rollout materials](https://www.microsoft.com/download/details.aspx?id=57600&WT.mc_id=rss_alldownloads_all) to roll out multi-factor authentication to your organization.
-
-## Enable Multi-Factor Authentication with Conditional Access
-
-Conditional Access policies enforce registration, requiring unregistered users to complete registration at first sign-in, an important security consideration.
-
-[Azure AD Identity Protection](../identity-protection/howto-identity-protection-configure-risk-policies.md) contributes both a registration policy and automated risk detection and remediation policies to the Azure AD Multi-Factor Authentication story. Policies can be created to force password changes when there is a threat of compromised identity, or to require MFA when a sign-in is deemed risky by the following [events](../identity-protection/overview-identity-protection.md):
-
-* Leaked credentials
-* Sign-ins from anonymous IP addresses
-* Impossible travel to atypical locations
-* Sign-ins from unfamiliar locations
-* Sign-ins from infected devices
-* Sign-ins from IP addresses with suspicious activities
-
-Some of the risks detected by Azure Active Directory Identity Protection occur in real time and some require offline processing. Administrators can choose to block users who exhibit risky behaviors and remediate manually, require a password change, or require multi-factor authentication as part of their Conditional Access policies.
-
-## Define network locations
-
-We recommend that organizations use Conditional Access to define their network using [named locations](../conditional-access/location-condition.md#named-locations). If your organization is using Identity Protection, consider using risk-based policies instead of named locations.
-
-### Configuring a named location
-
-1. Open **Azure Active Directory** in the Azure portal
-2. Select **Security**
-3. Under **Manage**, choose **Named Locations**
-4. Select **New Location**
-5. In the **Name** field, provide a meaningful name
-6. Select whether you are defining the location using *IP ranges* or *Countries/Regions*
- 1. If using *IP Ranges*
- 1. Decide whether to *Mark as trusted location*. Signing in from a trusted location lowers a user's sign-in risk. Only mark this location as trusted if you know the IP ranges entered are established and credible in your organization.
- 2. Specify the IP Ranges
- 2. If using *Countries/Regions*
- 1. Expand the drop-down menu and select the countries or regions you wish to define for this named location.
- 2. Decide whether to *Include unknown areas*. Unknown areas are IP addresses that can't be mapped to a country/region.
-7. Select **Create**
-
-## Plan authentication methods
+|-|--|
+|**Cloud-only** identity environment with modern authentication | **No prerequisite tasks** |
+|**Hybrid identity** scenarios | Deploy [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) and synchronize user identities between the on-premises Active Directory Domain Services (AD DS) and Azure AD. |
+| **On-premises legacy applications** published for cloud access| Deploy [Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) |
-Administrators can choose the [authentication methods](../authentication/concept-authentication-methods.md) that they want to make available for users. It is important to allow more than a single authentication method so that users have a backup method available in case their primary method is unavailable. The following methods are available for administrators to enable:
+## Choose authentication methods for MFA
-> [!TIP]
-> Microsoft recommends using the Microsoft Authenticator (mobile app) as the primary method for Azure AD Multi-Factor Authentication for a more secure and improved user experience. The Microsoft Authenticator app also [meets](https://azure.microsoft.com/resources/microsoft-nist/) the National Institute of Standards and Technology Authenticator Assurance Levels.
+There are many methods that can be used for second-factor authentication. You can choose from the list of available authentication methods, evaluating each in terms of security, usability, and availability.
-### Notification through mobile app
+>[!IMPORTANT]
+>Enable more than one MFA method so that users have a backup method available in case their primary method is unavailable.
+Methods include:
-A push notification is sent to the Microsoft Authenticator app on your mobile device. The user views the notification and selects **Approve** to complete verification. Push notifications through a mobile app provide the least intrusive option for users. They are also the most reliable and secure option because they use a data connection rather than telephony.
+- [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)
+- [Microsoft Authenticator app](concept-authentication-authenticator-app.md)
+- [FIDO2 security key (preview)](concept-authentication-passwordless.md#fido2-security-keys)
+- [OATH hardware tokens (preview)](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview)
+- [OATH software tokens](concept-authentication-oath-tokens.md#oath-software-tokens)
+- [SMS verification](concept-authentication-phone-options.md#mobile-phone-verification)
+- [Voice call verification](concept-authentication-phone-options.md)
-> [!NOTE]
-> If your organization has staff working in or traveling to China, the **Notification through mobile app** method on **Android devices** does not work in that country/region. Alternate methods should be made available for those users.
+When choosing the authentication methods that will be used in your tenant, consider their security and usability:
-### Verification code from mobile app
+![Choose the right authentication method](media/concept-authentication-methods/authentication-methods.png)
-A mobile app like the Microsoft Authenticator app generates a new OATH verification code every 30 seconds. The user enters the verification code into the sign-in interface. The mobile app option can be used whether or not the phone has a data or cellular signal.
+To learn more about the strength and security of these methods and how they work, see the following resources:
-### Call to phone
+- [What authentication and verification methods are available in Azure Active Directory?](concept-authentication-methods.md)
+- [Video: Choose the right authentication methods to keep your organization safe](https://youtu.be/LB2yj4HSptc)
-An automated voice call is placed to the user. The user answers the call and presses **#** on the phone keypad to approve their authentication. Call to phone is a great backup method for notification or verification code from a mobile app.
+You can use this [PowerShell script](/samples/azure-samples/azure-mfa-authentication-method-analysis/azure-mfa-authentication-method-analysis/) to analyze users' MFA configurations and suggest the appropriate MFA authentication method.
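Separately from the linked script, a minimal sketch with the MSOnline module (assuming you have already run `Connect-MsolService`) can show which methods each user has registered:

```powershell
# Minimal sketch (not the linked script): list each user's registered MFA method types.
Get-MsolUser -All |
    Select-Object UserPrincipalName,
        @{ Name = 'MfaMethods'; Expression = { $_.StrongAuthenticationMethods.MethodType -join ', ' } }
```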
-### Text message to phone
+For the best flexibility and usability, use the Microsoft Authenticator app. This authentication method provides the best user experience and multiple modes, such as passwordless, MFA push notifications, and OATH codes. The Microsoft Authenticator app also meets the National Institute of Standards and Technology (NIST) [Authenticator Assurance Level 2 requirements](../standards/nist-authenticator-assurance-level-2.md).
-A text message that contains a verification code is sent to the user, the user is prompted to enter the verification code into the sign-in interface.
+You can control the authentication methods available in your tenant. For example, you may want to block some of the least secure methods, such as SMS.
-### Choose verification options
+| Authentication method | Manage from | Scoping |
+|---|---|---|
+| Microsoft Authenticator (Push notification and passwordless phone sign-in) | MFA settings or Authentication methods policy | Authenticator passwordless phone sign-in can be scoped to users and groups |
+| FIDO2 security key | Authentication methods policy | Can be scoped to users and groups |
+| Software or Hardware OATH tokens | MFA settings | |
+| SMS verification | MFA settings | Manage SMS sign-in for primary authentication in authentication policy. SMS sign-in can be scoped to users and groups. |
+| Voice calls | Authentication methods policy | |
-1. Browse to **Azure Active Directory**, **Users**, **Multi-Factor Authentication**.
- ![Accessing the Multi-Factor Authentication portal from Azure AD Users blade in Azure portal](media/howto-mfa-getstarted/users-mfa.png)
-
-1. In the new tab that opens browse to **service settings**.
-1. Under **verification options**, check all of the boxes for methods available to users.
-
- ![Configuring verification methods in the Multi-Factor Authentication service settings tab](media/howto-mfa-getstarted/mfa-servicesettings-verificationoptions.png)
-
-1. Click on **Save**.
-1. Close the **service settings** tab.
+## Plan Conditional Access policies
-> [!WARNING]
-> Do not disable methods for your organization if you are using [Security Defaults](../fundamentals/concept-fundamentals-security-defaults.md). Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the MFA service settings portal.
+Azure AD MFA is enforced with Conditional Access policies. These policies allow you to prompt users for multifactor authentication when needed for security and stay out of users' way when not needed.
-## Plan registration policy
+![Conceptual Conditional Access process flow](media/howto-mfa-getstarted/conditional-access-overview-how-it-works.png)
-Administrators must determine how users will register their methods. Organizations should [enable the new combined registration experience](howto-registration-mfa-sspr-combined.md) for Azure AD MFA and self-service password reset (SSPR). SSPR allows users to reset their password in a secure way using the same methods they use for multi-factor authentication. We recommend this combined registration because it's a great experience for users, with the ability to register once for both services. Enabling the same methods for SSPR and Azure AD MFA will allow your users to be registered to use both features.
+In the Azure portal, you configure Conditional Access policies under **Azure Active Directory** > **Security** > **Conditional Access**.
-### Registration with Identity Protection
+To learn more about creating Conditional Access policies, see [Conditional Access policy to prompt for Azure AD MFA when a user signs in to the Azure portal](tutorial-enable-azure-mfa.md). This helps you to:
-If your organization is using Azure Active Directory Identity Protection, [configure the MFA registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) to prompt your users to register the next time they sign in interactively.
+- Become familiar with the user interface
+- Get a first impression of how Conditional Access works
-### Registration without Identity Protection
+For end-to-end guidance on Azure AD Conditional Access deployment, see the [Conditional Access deployment plan](../conditional-access/plan-conditional-access.md).
-If your organization does not have licenses that enable Identity Protection, users are prompted to register the next time that MFA is required at sign-in. Users may not be registered for MFA if they don't use applications protected with MFA. It's important to get all users registered so that bad actors cannot guess the password of a user and register for MFA on their behalf, effectively taking control of the account.
+### Common policies for Azure AD MFA
-#### Enforcing registration
+Common use cases to require Azure AD MFA include:
-Using the following steps, a Conditional Access policy can force users to register for Multi-Factor Authentication:
+- For [administrators](../conditional-access/howto-conditional-access-policy-admin-mfa.md)
+- To [specific applications](tutorial-enable-azure-mfa.md)
+- For [all users](../conditional-access/howto-conditional-access-policy-all-users-mfa.md)
+- For [Azure management](../conditional-access/howto-conditional-access-policy-azure-management.md)
+- From [network locations you don't trust](../conditional-access/untrusted-networks.md)
-1. Create a group, add all users not currently registered.
-2. Using Conditional Access, enforce multi-factor authentication for this group for access to all resources.
-3. Periodically, reevaluate the group membership, and remove users who have registered from the group.
+### Named locations
-You may identify registered and non-registered Azure AD MFA users with PowerShell commands that rely on the [MSOnline PowerShell module](/powershell/azure/active-directory/install-msonlinev1).
+To manage your Conditional Access policies, the location condition of a Conditional Access policy enables you to tie access control settings to the network locations of your users. We recommend using [Named Locations](../conditional-access/location-condition.md) so that you can create logical groupings of IP address ranges or countries and regions. You can then create a policy for all apps that blocks sign-in from those named locations. Be sure to exempt your administrators from this policy.
-#### Identify registered users
+### Risk-based policies
-```PowerShell
-Get-MsolUser -All | where {$_.StrongAuthenticationMethods -ne $null} | Select-Object -Property UserPrincipalName | Sort-Object userprincipalname
-```
+If your organization uses [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) to detect risk signals, consider using [risk-based policies](../identity-protection/howto-identity-protection-configure-risk-policies.md) instead of named locations. Policies can be created to force password changes when there is a threat of compromised identity or require multifactor authentication when a sign-in is deemed [risky by events](../identity-protection/overview-identity-protection.md#risk-detection-and-remediation) such as leaked credentials, sign-ins from anonymous IP addresses, and more.
-#### Identify non-registered users
+Risk policies include:
-```PowerShell
-Get-MsolUser -All | where {$_.StrongAuthenticationMethods.Count -eq 0} | Select-Object -Property UserPrincipalName | Sort-Object userprincipalname
-```
+- [Require all users to register for Azure AD MFA](../identity-protection/howto-identity-protection-configure-mfa-policy.md)
+- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
+- [Require MFA for users with medium or high sign-in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
### Convert users from per-user MFA to Conditional Access based MFA
function Set-MfaState {
Get-MsolUser -All | Set-MfaState -State Disabled
```
-> [!NOTE]
-> We recently changed this behavior and updated the PowerShell script above accordingly. Previously, the script saved off the MFA methods, disabled MFA, and restored the methods. This is no longer necessary now that the default behavior for disable doesn't clear the methods.
+## Plan user session lifetime
-## Plan Conditional Access policies
+When planning your MFA deployment, it's important to think about how frequently you would like to prompt your users. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
+Azure AD has multiple settings that determine how often you need to reauthenticate. Understand the needs of your business and users and configure settings that provide the best balance for your environment.
-To plan your Conditional Access policy strategy, which will determine when MFA and other controls are required, refer to [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md).
-
-It is important to avoid being inadvertently locked out of your Azure AD tenant. You can mitigate the impact of this inadvertent lack of administrative access by [creating two or more emergency access accounts in your tenant](../roles/security-emergency-access.md) and excluding them from your Conditional Access policy.
-
-### Create Conditional Access policy
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using a global administrator account.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
- ![Create a Conditional Access policy to enable MFA for Azure portal users in pilot group](media/howto-mfa-getstarted/conditionalaccess-newpolicy.png)
-1. Provide a meaningful name for your policy.
-1. Under **users and groups**:
- * On the **Include** tab, select the **All users** radio button
- * On the **Exclude** tab, check the box for **Users and groups** and choose your emergency access accounts.
- * Click **Done**.
-1. Under **Cloud apps**, select the **All cloud apps** radio button.
- * OPTIONALLY: On the **Exclude** tab, choose cloud apps that your organization does not require MFA for.
- * Click **Done**.
-1. Under **Conditions** section:
- * OPTIONALLY: If you have enabled Azure Identity Protection, you can choose to evaluate sign-in risk as part of the policy.
- * OPTIONALLY: If you have configured trusted locations or named locations, you can specify to include or exclude those locations from the policy.
-1. Under **Grant**, make sure the **Grant access** radio button is selected.
- * Check the box for **Require multi-factor authentication**.
- * Click **Select**.
-1. Skip the **Session** section.
-1. Set the **Enable policy** toggle to **On**.
-1. Click **Create**.
+We recommend using devices with Primary Refresh Tokens (PRTs) for an improved end-user experience, and reducing the session lifetime with sign-in frequency policy only for specific business use cases.
-## Plan integration with on-premises systems
+For more information, see [Optimize reauthentication prompts and understand session lifetime for Azure AD MFA](concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
-Some legacy and on-premises applications that do not authenticate directly against Azure AD require additional steps to use MFA including:
+## Plan user registration
-* Legacy on-premises applications, which will need to use Application proxy.
-* On-premises RADIUS applications, which will need to use MFA adapter with NPS server.
-* On-premises AD FS applications, which will need to use MFA adapter with AD FS 2016 or newer.
+A major step in every MFA deployment is getting users registered to use MFA. Authentication methods such as Voice and SMS allow pre-registration, while others like the Authenticator App require user interaction. Administrators must determine how users will register their methods.
-Applications that authenticate directly with Azure AD and have modern authentication (WS-Fed, SAML, OAuth, OpenID Connect) can make use of Conditional Access policies directly.
+### Combined registration for SSPR and Azure AD MFA
+We recommend using the [combined registration experience](howto-registration-mfa-sspr-combined.md) for Azure AD MFA and [Azure AD self-service password reset (SSPR)](concept-sspr-howitworks.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD MFA. Combined registration is a single step for end users.
-### Use Azure AD MFA with Azure AD Application Proxy
-
-Applications residing on-premises can be published to your Azure AD tenant via [Azure AD Application Proxy](../app-proxy/application-proxy.md) and can take advantage of Azure AD Multi-Factor Authentication if they are configured to use Azure AD pre-authentication.
-
-These applications are subject to Conditional Access policies that enforce Azure AD Multi-Factor Authentication, just like any other Azure AD-integrated application.
-
-Likewise, if Azure AD Multi-Factor Authentication is enforced for all user sign-ins, on-premises applications published with Azure AD Application Proxy will be protected.
-
-### Integrating Azure AD Multi-Factor Authentication with Network Policy Server
-
-The Network Policy Server (NPS) extension for Azure AD MFA adds cloud-based MFA capabilities to your authentication infrastructure using your existing servers. With the NPS extension, you can add phone call, text message, or phone app verification to your existing authentication flow. This integration has the following limitations:
+### Registration with Identity Protection
+Azure AD Identity Protection contributes both a registration policy and automated risk detection and remediation policies to the Azure AD MFA story. Policies can be created to force password changes when there is a threat of compromised identity, or to require MFA when a sign-in is deemed risky.
+If you use Azure AD Identity Protection, [configure the Azure AD MFA registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) to prompt your users to register the next time they sign in interactively.
-* With the CHAPv2 protocol, only authenticator app push notifications and voice call are supported.
-* Conditional Access policies cannot be applied.
+### Registration without Identity Protection
+If you don't have licenses that enable Azure AD Identity Protection, users are prompted to register the next time that MFA is required at sign-in.
+To require users to use MFA, you can use Conditional Access policies and target frequently used applications like HR systems.
+If a user's password is compromised, it could be used to register for MFA, taking control of their account. We therefore recommend [securing the security registration process with Conditional Access policies](../conditional-access/howto-conditional-access-policy-registration.md) that require trusted devices and locations.
+You can further secure the process by also requiring a [Temporary Access Pass](howto-authentication-temporary-access-pass.md), a time-limited passcode issued by an admin that satisfies strong authentication requirements and can be used to onboard other authentication methods, including passwordless ones.
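As an illustrative sketch (the Temporary Access Pass API was in preview on the Microsoft Graph beta endpoint at the time of writing, so details may change; the user and lifetime shown are placeholders), an admin could issue a pass like this:

```powershell
# Sketch only: issue a Temporary Access Pass via Microsoft Graph (beta, preview).
# Assumes $token holds a Graph access token with UserAuthenticationMethod.ReadWrite.All.
$body = @{ lifetimeInMinutes = 60; isUsableOnce = $true } | ConvertTo-Json
Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/beta/users/user@contoso.com/authentication/temporaryAccessPassMethods" `
    -Headers @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" } `
    -Body $body
```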
-The NPS extension acts as an adapter between RADIUS and cloud-based Azure AD MFA to provide a second factor of authentication to protect [VPN](howto-mfa-nps-extension-vpn.md), [Remote Desktop Gateway connections](howto-mfa-nps-extension-rdg.md), or other RADIUS-capable applications. Users that register for Azure AD MFA in this environment will be challenged for all authentication attempts; the lack of Conditional Access policies means MFA is always required.
+### Increase the security of registered users
+If you have users registered for MFA using SMS or voice calls, you may want to move them to more secure methods such as the Microsoft Authenticator app. Microsoft now offers a public preview of functionality that allows you to prompt users to set up the Microsoft Authenticator app during sign-in. You can set these prompts by group, controlling who is prompted, enabling targeted campaigns to move users to the more secure method.
-#### Implementing your NPS server
+### Plan recovery scenarios
+As mentioned before, ensure users are registered for more than one MFA method, so that if one is unavailable, they have a backup.
+If the user does not have a backup method available, you can:
-If you have an NPS instance deployed and in use already, reference [Integrate your existing NPS Infrastructure with Azure AD Multi-Factor Authentication](howto-mfa-nps-extension.md). If you are setting up NPS for the first time, refer to [Network Policy Server (NPS)](/windows-server/networking/technologies/nps/nps-top) for instructions. Troubleshooting guidance can be found in the article [Resolve error messages from the NPS extension for Azure AD Multi-Factor Authentication](howto-mfa-nps-extension-errors.md).
+- Provide them with a Temporary Access Pass so that they can manage their own authentication methods. You can also provide a Temporary Access Pass to enable temporary access to resources.
+- Update their methods as an administrator. To do so, select the user in the Azure portal, then select Authentication methods and update their methods.
+### User communications
-#### Prepare NPS for users that aren't enrolled for MFA
+It's critical to inform users about upcoming changes, Azure AD MFA registration requirements, and any necessary user actions.
+We provide [communication templates](https://aka.ms/mfatemplates) and [end-user documentation](../user-help/security-info-setup-signin.md) to help draft your communications. Send users to [https://myprofile.microsoft.com](https://myprofile.microsoft.com/) to register by selecting the **Security Info** link on that page.
-Choose what happens when users that aren't enrolled with MFA try to authenticate. Use the registry setting `REQUIRE_USER_MATCH` in the registry path `HKLM\Software\Microsoft\AzureMFA` to control the feature behavior. This setting has a single configuration option.
+## Plan integration with on-premises systems
-| Key | Value | Default |
-| | | |
-| `REQUIRE_USER_MATCH` | TRUE / FALSE | Not set (equivalent to TRUE) |
+Applications that authenticate directly with Azure AD and have modern authentication (WS-Fed, SAML, OAuth, OpenID Connect) can make use of Conditional Access policies.
+Some legacy and on-premises applications do not authenticate directly against Azure AD and require additional steps to use Azure AD MFA. You can integrate them by using Azure AD Application Proxy or [Network Policy Server](/windows-server/networking/core-network-guide/core-network-guide#BKMK_optionalfeatures).
-The purpose of this setting is to determine what to do when a user is not enrolled for MFA. The effects of changing this setting are listed in the table below.
+### Integrate with AD FS resources
-| Settings | User MFA Status | Effects |
-| | | |
-| Key does not exist | Not enrolled | MFA challenge is unsuccessful |
-| Value set to True / not set | Not enrolled | MFA challenge is unsuccessful |
-| Key set to False | Not enrolled | Authentication without MFA |
-| Key set to False or True | Enrolled | Must authenticate with MFA |
+We recommend migrating applications secured with Active Directory Federation Services (AD FS) to Azure AD. However, if you are not ready to migrate these to Azure AD, you can use the Azure MFA adapter with AD FS 2016 or newer.
+If your organization is federated with Azure AD, you can [configure Azure AD MFA as an authentication provider with AD FS resources](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa) both on-premises and in the cloud.
-### Integrate with Active Directory Federation Services
+### RADIUS clients and Azure AD MFA
-If your organization is federated with Azure AD, you can use [Azure AD Multi-Factor Authentication to secure AD FS resources](multi-factor-authentication-get-started-adfs.md), both on-premises and in the cloud. Azure AD MFA enables you to reduce passwords and provide a more secure way to authenticate. Starting with Windows Server 2016, you can now configure Azure AD MFA for primary authentication.
+For applications that use RADIUS authentication, we recommend moving client applications to modern protocols such as SAML, OpenID Connect, or OAuth on Azure AD. If the application cannot be updated, you can deploy [Network Policy Server (NPS) with the Azure MFA extension](howto-mfa-nps-extension.md). The Network Policy Server (NPS) extension acts as an adapter between RADIUS-based applications and Azure AD MFA to provide a second factor of authentication.
-Unlike with AD FS in Windows Server 2012 R2, the AD FS 2016 Azure AD MFA adapter integrates directly with Azure AD and does not require an on-premises Azure MFA server. The Azure AD MFA adapter is built into Windows Server 2016, and there is no need for an additional installation.
+#### Common integrations
-When using Azure AD MFA with AD FS 2016 and the target application is subject to Conditional Access policy, there are additional considerations:
+Many vendors now support SAML authentication for their applications. When possible, we recommend federating these applications with Azure AD and enforcing MFA through Conditional Access. If your vendor doesn't support modern authentication, you can use the NPS extension.
+Common RADIUS client integrations include applications such as [Remote Desktop Gateways](howto-mfa-nps-extension-rdg.md) and [VPN servers](howto-mfa-nps-extension-vpn.md).
-* Conditional Access is available when the application is a relying party to Azure AD, federated with AD FS 2016 or newer.
-* Conditional Access is not available when the application is a relying party to AD FS 2016 or AD FS 2019 and is managed or federated with AD FS 2016 or AD FS 2019.
-* Conditional Access is also not available when AD FS 2016 or AD FS 2019 is configured to use Azure AD MFA as the primary authentication method.
+Others might include:
-#### AD FS logging
+- Citrix Gateway
-Standard AD FS 2016 and 2019 logging in both the Windows Security Log and the AD FS Admin log, contains information about authentication requests and their success or failure. Event log data within these events will indicate whether Azure AD MFA was used. For example, an AD FS Auditing Event ID 1200 may contain:
+  - [Citrix Gateway](https://docs.citrix.com/advanced-concepts/implementation-guides/citrix-gateway-microsoft-azure.html#microsoft-azure-mfa-deployment-methods) supports both RADIUS and NPS extension integration, as well as SAML integration.
-```
-<MfaPerformed>true</MfaPerformed>
-<MfaMethod>MFA</MfaMethod>
-```
+- Cisco VPN
+ - The Cisco VPN supports both RADIUS and [SAML authentication for SSO](../saas-apps/cisco-anyconnect.md).
+ - By moving from RADIUS authentication to SAML, you can integrate the Cisco VPN without deploying the NPS extension.
-#### Renew and manage certificates
+- All VPNs
-On each AD FS server, in the local computer My Store, there will be a self-signed Azure AD MFA certificate titled OU=Microsoft AD FS Azure MFA, which contains the certificate expiration date. Check the validity period of this certificate on each AD FS server to determine the expiration date.
+## Deploy Azure AD MFA
-If the validity period of your certificates is nearing expiration, [generate and verify a new MFA certificate on each AD FS server](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa#configure-the-ad-fs-servers).
+Your MFA rollout plan should include a pilot deployment followed by deployment waves that are within your support capacity. Begin your rollout by applying your Conditional Access policies to a small group of pilot users. After evaluating the effect on the pilot users, the process used, and the registration behaviors, you can either add more groups to the policy or add more users to the existing groups.
-The following guidance details how to manage the Azure AD MFA certificates on your AD FS servers. When you configure AD FS with Azure AD MFA, the certificates generated via the `New-AdfsAzureMfaTenantCertificate` PowerShell cmdlet are valid for two years. Renew and install the renewed certificates prior to expiration to ovoid disruptions in MFA service.
+Follow the steps below:
-## Implement your plan
+1. Meet the necessary prerequisites
+1. Configure chosen authentication methods
+1. Configure your Conditional Access policies
+1. Configure session lifetime settings
+1. Configure Azure AD MFA registration policies
-Now that you have planned your solution, you can implement by following the steps below:
+## Manage Azure AD MFA
+This section provides reporting and troubleshooting information for Azure AD MFA.
-1. Meet any necessary prerequisites
- 1. Deploy [Azure AD Connect](../hybrid/whatis-hybrid-identity.md) for any hybrid scenarios
- 1. Deploy [Azure AD Application Proxy](../app-proxy/application-proxy.md) for on any on-premises apps published for cloud access
- 1. Deploy [NPS](/windows-server/networking/technologies/nps/nps-top) for any RADIUS authentication
- 1. Ensure users have upgraded to supported versions of Microsoft Office with modern authentication enabled
-1. Configure chosen [authentication methods](#choose-verification-options)
-1. Define your [named network locations](../conditional-access/location-condition.md#named-locations)
-1. Select groups to begin rolling out MFA.
-1. Configure your [Conditional Access policies](#create-conditional-access-policy)
-1. Configure your MFA registration policy
- 1. [Combined MFA and SSPR](howto-registration-mfa-sspr-combined.md)
- 1. With [Identity Protection](../identity-protection/howto-identity-protection-configure-mfa-policy.md)
-1. Send user communications and get users to enroll at [https://aka.ms/mfasetup](https://aka.ms/mfasetup)
-1. [Keep track of who's enrolled](#identify-non-registered-users)
+### Reporting and Monitoring
-> [!TIP]
-> Government cloud users can enroll at [https://aka.ms/GovtMFASetup](https://aka.ms/GovtMFASetup)
+Azure AD provides reports that offer technical and business insights, let you follow the progress of your deployment, and show whether your users are successful at signing in with MFA. Have your business and technical application owners assume ownership of and consume these reports based on your organization's requirements.
-## Manage your solution
+You can monitor authentication method registration and usage across your organization using the [Authentication Methods Activity dashboard](howto-authentication-methods-activity.md). This helps you understand what methods are being registered and how they're being used.
-Reports for Azure AD MFA
+#### Sign-in report to review MFA events
-Azure AD Multi-Factor Authentication provides reports through the Azure portal:
+The Azure AD sign-in reports include authentication details for events when a user is prompted for multi-factor authentication, and whether any Conditional Access policies were in use. You can also use PowerShell to report on users registered for MFA.
-| Report | Location | Description |
-| | | |
-| Usage and fraud alerts | Azure AD > Sign-ins | Provides information on overall usage, user summary, and user details; as well as a history of fraud alerts submitted during the date range specified. |
+NPS extension and AD FS logs can be viewed from **Security** > **MFA** > **Activity report**.
-## Troubleshoot MFA issues
+For more information, and additional MFA reports, see [Review Azure AD Multi-Factor Authentication events](howto-mfa-reporting.md#view-the-azure-ad-sign-ins-report).
-Find solutions for common issues with Azure AD MFA at the [Troubleshooting Azure AD Multi-Factor Authentication article](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) on the Microsoft Support Center.
+### Troubleshoot Azure AD MFA
+See [Troubleshooting Azure AD MFA](https://support.microsoft.com/help/2937344/troubleshooting-azure-multi-factor-authentication-issues) for common issues.
## Next steps
-To see Azure AD Multi-Factor Authentication in action, complete the following tutorial:
+[Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)
-> [!div class="nextstepaction"]
-> [Enable Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md)
active-directory Howto Mfa Nps Extension Vpn https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md
Previously updated : 11/21/2019 Last updated : 07/07/2021
To troubleshoot these issues, an ideal place to start is to examine the Security
## Configure Multi-Factor Authentication
-For assistance configuring users for Multi-Factor Authentication see the articles [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](howto-mfa-getstarted.md#create-conditional-access-policy) and [Set up my account for two-step verification](../user-help/multi-factor-authentication-end-user-first-time.md)
+For assistance configuring users for Multi-Factor Authentication, see the articles [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](howto-mfa-getstarted.md#plan-conditional-access-policies) and [Set up my account for two-step verification](../user-help/multi-factor-authentication-end-user-first-time.md).
## Install and configure the NPS extension
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-nps-extension.md
Previously updated : 08/31/2020 Last updated : 07/07/2021
If you need to create and configure a test account, use the following steps:
1. Sign in to [https://aka.ms/mfasetup](https://aka.ms/mfasetup) with a test account. 2. Follow the prompts to set up a verification method.
-3. In the Azure portal as an admin user, [create a Conditional Access policy](howto-mfa-getstarted.md#create-conditional-access-policy) to require multi-factor authentication for the test account.
+3. In the Azure portal as an admin user, [create a Conditional Access policy](howto-mfa-getstarted.md#plan-conditional-access-policies) to require multi-factor authentication for the test account.
> [!IMPORTANT] >
active-directory Msal Compare Msal Js And Adal Js https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
Title: Differences between MSAL.js and ADAL.js| Azure
+ Title: "Migrate your JavaScript application from ADAL.js to MSAL.js | Azure"
-description: Learn about the differences between Microsoft Authentication Library for JavaScript (MSAL.js) and Azure AD Authentication Library for JavaScript (ADAL.js) and how to choose which to use.
+description: How to update your existing JavaScript application to use the Microsoft Authentication Library (MSAL) for authentication and authorization instead of the Active Directory Authentication Library (ADAL).
Previously updated : 04/10/2019
-#Customer intent: As an application developer, I want to learn about the differences between the ADAL.js and MSAL.js libraries so I can migrate my applications to MSAL.js.
Last updated : 07/06/2021
+#Customer intent: As an application developer, I want to learn how to change the code in my JavaScript application from using ADAL.js as its authentication library to MSAL.js.
-# Differences between MSAL.js and ADAL.js
+# How to migrate a JavaScript app from ADAL.js to MSAL.js
-Both the Microsoft Authentication Library for JavaScript (MSAL.js) and Azure AD Authentication Library for JavaScript (ADAL.js) are used to authenticate Azure AD entities and request tokens from Azure AD. Up until now, most developers have worked with Azure AD for developers (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using ADAL. Now, using MSAL.js, you can authenticate a broader set of Microsoft identities (Azure AD identities and Microsoft accounts, and social and local accounts through Azure AD B2C) through the Microsoft identity platform.
+[Microsoft Authentication Library for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js) (MSAL.js, also known as *msal-browser*) 2.x is the authentication library we recommend using with JavaScript applications on the Microsoft identity platform. This article highlights the changes you need to make to migrate an app that uses ADAL.js to use MSAL.js 2.x.
-This article describes how to choose between the Microsoft Authentication Library for JavaScript (MSAL.js) and Azure AD Authentication Library for JavaScript (ADAL.js) and compares the two libraries.
+> [!NOTE]
+> We strongly recommend MSAL.js 2.x over MSAL.js 1.x. The auth code grant flow is more secure and allows single-page applications to maintain a good user experience despite the privacy measures browsers like Safari have implemented to block third-party cookies, among other benefits.
-## Choosing between ADAL.js and MSAL.js
+## Prerequisites
-In most cases you want to use the Microsoft identity platform and MSAL.js, which is the latest generation of Microsoft authentication libraries. Using MSAL.js, you acquire tokens for users signing in to your application with Azure AD (work and school accounts), Microsoft (personal) accounts (MSA), or Azure AD B2C.
+- You must set the **Platform** / **Reply URL Type** to **Single-page application** in the App registrations portal. (If you have other platforms added in your app registration, such as **Web**, you need to make sure the redirect URIs do not overlap; see [Redirect URI restrictions](./reply-url.md).)
+- You must provide [polyfills](./msal-js-use-ie-browser.md) for the ES6 features that MSAL.js relies on (for example, promises) in order to run your apps on **Internet Explorer**.
+- Make sure you have migrated your Azure AD apps to the [v2 endpoint](../azuread-dev/azure-ad-endpoint-comparison.md) if you haven't already.
-If you are already familiar with the v1.0 endpoint (and ADAL.js), you might want to read [What's different about the v2.0 endpoint?](../azuread-dev/azure-ad-endpoint-comparison.md).
+## Install and import MSAL
-However, you still need to use ADAL.js if your application needs to sign in users with earlier versions of [Active Directory Federation Services (ADFS)](/windows-server/identity/active-directory-federation-services).
+There are two ways to install the MSAL.js 2.x library:
-## Key differences in authentication with MSAL.js
+### Via NPM:
-### Core API
+```console
+npm install @azure/msal-browser
+```
-* ADAL.js uses [AuthenticationContext](https://github.com/AzureAD/azure-activedirectory-library-for-js/wiki/Config-authentication-context#authenticationcontext) as the representation of an instance of your application's connection to the authorization server or identity provider through an authority URL. On the contrary, MSAL.js API is designed around user agent client application(a form of public client application in which the client code is executed in a user-agent such as a web browser). It provides the `UserAgentApplication` class to represent an instance of the application's authentication context with the authorization server. For more details, see [Initialize using MSAL.js](msal-js-initializing-client-applications.md).
+Then, depending on your module system, import it as shown below:
-* In ADAL.js, the methods to acquire tokens are associated with a single authority set in the `AuthenticationContext`. In MSAL.js, the acquire token requests can take different authority values than what is set in the `UserAgentApplication`. This allows MSAL.js to acquire and cache tokens separately for multiple tenants and user accounts in the same application.
+```javascript
+import * as msal from "@azure/msal-browser"; // ESM
-* The method to acquire and renew tokens silently without prompting users is named `acquireToken` in ADAL.js. In MSAL.js, this method is named `acquireTokenSilent` to be more descriptive of this functionality.
+const msal = require('@azure/msal-browser'); // CommonJS
+```
-### Authority value `common`
+### Via CDN:
-In v1.0, using the `https://login.microsoftonline.com/common` authority will allow users to sign in with any Azure AD account (for any organization).
+Load the script in the header section of your HTML document:
-In v2.0, using the `https://login.microsoftonline.com/common` authority, will allow users to sign in with any Azure AD organization account or a Microsoft personal account (MSA). To restrict the sign in to only Azure AD accounts (same behavior as with ADAL.js), use `https://login.microsoftonline.com/organizations`. For details, see the `authority` config option in [Initialize using MSAL.js](msal-js-initializing-client-applications.md).
+```html
+<!DOCTYPE html>
+<html>
+ <head>
+ <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.14.2/js/msal-browser.min.js"></script>
+ </head>
+</html>
+```
-### Scopes for acquiring tokens
-* Scope instead of resource parameter in authentication requests to acquire tokens
+For alternative CDN links and best practices when using CDN, see: [CDN Usage](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/cdn-usage.md)
- v2.0 protocol uses scopes instead of resource in the requests. In other words, when your application needs to request tokens with permissions for a resource such as MS Graph, the difference in values passed to the library methods is as follows:
+## Initialize MSAL
- v1.0: resource = https\://graph.microsoft.com
+In ADAL.js, you instantiate the [AuthenticationContext](https://github.com/AzureAD/azure-activedirectory-library-for-js/wiki/Config-authentication-context#authenticationcontext) class, which then exposes the methods you can use to achieve authentication (`login`, `acquireTokenPopup`, and so on). This object serves as the representation of your application's connection to the authorization server or identity provider. When initializing, the only mandatory parameter is the **clientId**:
- v2.0: scope = https\://graph.microsoft.com/User.Read
+```javascript
+window.config = {
+ clientId: "YOUR_CLIENT_ID"
+};
- You can request scopes for any resource API using the URI of the API in this format: appidURI/scope For example: https:\//mytenant.onmicrosoft.com/myapi/api.read
+var authContext = new AuthenticationContext(config);
+```
- Only for the MS Graph API, a scope value `user.read` maps to https:\//graph.microsoft.com/User.Read and can be used interchangeably.
+In MSAL.js, you instantiate the [PublicClientApplication](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html) class instead. Like ADAL.js, the constructor expects a [configuration object](#configure-msal) that contains the `clientId` parameter at minimum. For more information, see [Initialize MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/initialization.md).
- ```javascript
- var request = {
- scopes = ["User.Read"];
- };
+```javascript
+const msalConfig = {
+ auth: {
+ clientId: 'YOUR_CLIENT_ID'
+ }
+};
- acquireTokenPopup(request);
- ```
+const msalInstance = new msal.PublicClientApplication(msalConfig);
+```
-* Dynamic scopes for incremental consent.
+In both ADAL.js and MSAL.js, the authority URI defaults to `https://login.microsoftonline.com/common` if you do not specify it.
- When building applications using v1.0, you needed to register the full set of permissions(static scopes) required by the application for the user to consent to at the time of login. In v2.0, you can use the scope parameter to request the permissions at the time you want them. These are called dynamic scopes. This allows the user to provide incremental consent to scopes. So if at the beginning you just want the user to sign in to your application and you donΓÇÖt need any kind of access, you can do so. If later you need the ability to read the calendar of the user, you can then request the calendar scope in the acquireToken methods and get the user's consent. For example:
+> [!NOTE]
+> If you use the `https://login.microsoftonline.com/common` authority in v2.0, you will allow users to sign in with any Azure AD organization or a personal Microsoft account (MSA). In MSAL.js, if you want to restrict sign-in to Azure AD accounts only (the same behavior as with ADAL.js), use `https://login.microsoftonline.com/organizations` instead.
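+
+For example, here's a minimal sketch of a configuration that restricts sign-in to work and school accounts (the client ID is a placeholder):
+
+```javascript
+const msalConfig = {
+    auth: {
+        clientId: "YOUR_CLIENT_ID",
+        // Restricts sign-in to Azure AD accounts, mirroring the ADAL.js behavior
+        authority: "https://login.microsoftonline.com/organizations"
+    }
+};
+
+const msalInstance = new msal.PublicClientApplication(msalConfig);
+```
+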
- ```javascript
- var request = {
- scopes = ["https://graph.microsoft.com/User.Read", "https://graph.microsoft.com/Calendar.Read"];
- };
+## Configure MSAL
- acquireTokenPopup(request);
- ```
+Some of the [configuration options in ADAL.js](https://github.com/AzureAD/azure-activedirectory-library-for-js/wiki/Config-authentication-context) that are used when initializing [AuthenticationContext](https://github.com/AzureAD/azure-activedirectory-library-for-js/wiki/Config-authentication-context#authenticationcontext) are deprecated in MSAL.js, while some new ones are introduced. See the [full list of available options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md). Importantly, many of these options, except for `clientId`, can be overridden during token acquisition, allowing you to set them on a *per-request* basis. For instance, you can use a different **authority URI** or **redirect URI** than the one you set during initialization when acquiring tokens.
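+
+For instance, a sketch of overriding the authority for a single token request (the tenant ID is a placeholder):
+
+```javascript
+msalInstance.acquireTokenPopup({
+    scopes: ["User.Read"],
+    // Overrides the authority set during initialization, for this request only
+    authority: "https://login.microsoftonline.com/ENTER_TENANT_ID"
+}).then((response) => {
+    // do something with the auth response
+});
+```
+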
-* Scopes for V1.0 APIs
+Additionally, you no longer need to specify the login experience (that is, whether to use popup windows or redirect the page) via the configuration options. Instead, MSAL.js exposes `loginPopup` and `loginRedirect` methods through the `PublicClientApplication` instance.
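+
+For example, a minimal sketch of each login experience:
+
+```javascript
+// Popup experience
+msalInstance.loginPopup({ scopes: ["User.Read"] })
+    .then((response) => {
+        // do something with the auth response
+    });
+
+// Redirect experience; the response is processed via handleRedirectPromise
+msalInstance.loginRedirect({ scopes: ["User.Read"] });
+```
+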
- When getting tokens for V1.0 APIs using MSAL.js, you can request all the static scopes registered on the API by appending `.default` to the App ID URI of the API as scope. For example:
+## Enable logging
- ```javascript
- var request = {
- scopes = [ appidURI + "/.default"];
- };
+In ADAL.js, you configure logging separately at any place in your code:
- acquireTokenPopup(request);
- ```
+```javascript
+window.config = {
+ clientId: "YOUR_CLIENT_ID"
+};
+
+var authContext = new AuthenticationContext(config);
+
+var Logging = {
+ level: 3,
+ log: function (message) {
+ console.log(message);
+ },
+ piiLoggingEnabled: false
+};
+
+authContext.log(Logging);
+```
+
+In MSAL.js, logging is part of the configuration options and is created during the initialization of `PublicClientApplication`:
+
+```javascript
+const msalConfig = {
+ auth: {
+ // authentication related parameters
+ },
+ cache: {
+ // cache related parameters
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback(loglevel, message, containsPii) {
+ console.log(message);
+ },
+ piiLoggingEnabled: false,
+ logLevel: msal.LogLevel.Verbose,
+ }
+ }
+}
+
+const msalInstance = new msal.PublicClientApplication(msalConfig);
+```
+
+## Switch to MSAL API
+
+Some of the public methods in ADAL.js have equivalents in MSAL.js:
+
+| ADAL | MSAL | Notes |
+|-|--|-|
+| `acquireToken` | `acquireTokenSilent` | Renamed and now expects an [account](https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal_common.html#accountinfo) object |
+| `acquireTokenPopup` | `acquireTokenPopup` | Now async and returns a promise |
+| `acquireTokenRedirect` | `acquireTokenRedirect` | Now async and returns a promise |
+| `handleWindowCallback` | `handleRedirectPromise` | Needed if using redirect experience |
+| `getCachedUser` | `getAllAccounts` | Renamed and now returns an array of accounts.|
+
+Others were deprecated, while MSAL.js offers new methods:
+
+| ADAL | MSAL | Notes |
+|--||--|
+| `login` | N/A | Deprecated. Use `loginPopup` or `loginRedirect` |
+| `logOut` | N/A | Deprecated. Use `logoutPopup` or `logoutRedirect`|
+| N/A | `loginPopup` | |
+| N/A | `loginRedirect` | |
+| N/A | `logoutPopup` | |
+| N/A | `logoutRedirect` | |
+| N/A | `getAccountByHomeId` | Filters accounts by home ID (oid + tenant ID) |
+| N/A | `getAccountByLocalId` | Filters accounts by local ID (useful for ADFS) |
+| N/A | `getAccountByUsername` | Filters accounts by username (if it exists) |
+
+In addition, because MSAL.js is implemented in TypeScript (unlike ADAL.js), it exposes various types and interfaces that you can use in your projects. For more information, see the [MSAL.js API reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/).
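+
+If you use the redirect experience, remember to call `handleRedirectPromise` on page load to process the response; a minimal sketch:
+
+```javascript
+msalInstance.handleRedirectPromise()
+    .then((response) => {
+        if (response !== null) {
+            // returning from a redirect: do something with the auth response
+        }
+    })
+    .catch((error) => {
+        // handle errors that occurred during the redirect flow
+    });
+```
+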
+
+## Use scopes instead of resources
+
+An important difference between the Azure AD **v1.0** and **v2.0** endpoints is how resources are accessed. When using ADAL.js with the **v1.0** endpoint, you would first register a permission in the app registration portal, and then request an access token for a resource (such as Microsoft Graph) as shown below:
+
+```javascript
+authContext.acquireTokenRedirect("https://graph.microsoft.com", function (error, token) {
+ // do something with the access token
+});
+```
+
+MSAL.js supports both **v1.0** and **v2.0** endpoints. The **v2.0** endpoint employs a *scope-centric* model to access resources. Thus, when you request an access token for a resource, you also need to specify the scope for that resource:
+
+```javascript
+msalInstance.acquireTokenRedirect({
+ scopes: ["https://graph.microsoft.com/User.Read"]
+});
+```
+
+One advantage of the scope-centric model is the ability to use *dynamic scopes*. When building applications using the v1.0 endpoint, you needed to register the full set of permissions (called *static scopes*) required by the application for the user to consent to at the time of login. In v2.0, you can use the scope parameter to request the permissions at the time you want them (hence, *dynamic scopes*). This allows the user to provide **incremental consent** to scopes. So if at the beginning you just want the user to sign in to your application and you don't need any kind of access, you can do so. If later you need the ability to read the calendar of the user, you can then request the calendar scope in the acquireToken methods and get the user's consent. For more information, see [Resources and scopes](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/resources-and-scopes.md).
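+
+A sketch of incremental consent in practice (the scope names are illustrative):
+
+```javascript
+async function readUserCalendar() {
+    // Sign in first with a bare minimum scope...
+    await msalInstance.loginPopup({ scopes: ["User.Read"] });
+
+    // ...later, request an additional scope only when the feature needs it.
+    // The user is prompted to consent only to the newly added scope.
+    const response = await msalInstance.acquireTokenPopup({
+        scopes: ["Calendars.Read"]
+    });
+    // do something with response.accessToken
+}
+```
+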
+
+## Use promises instead of callbacks
+
+In ADAL.js, callbacks are used for any operation after the authentication succeeds and a response is obtained:
+
+```javascript
+authContext.acquireTokenPopup(resource, extraQueryParameter, claims, function (error, token) {
+ // do something with the access token
+});
+```
+
+In MSAL.js, promises are used instead:
+
+```javascript
+msalInstance.acquireTokenPopup({
+ scopes: ["User.Read"] // shorthand for https://graph.microsoft.com/User.Read
+ }).then((response) => {
+ // do something with the auth response
+ }).catch((error) => {
+ // handle errors
+ });
+```
+
+You can also use the **async/await** syntax that comes with ES8:
+
+```javascript
+const getAccessToken = async() => {
+ try {
+ const authResponse = await msalInstance.acquireTokenPopup({
+ scopes: ["User.Read"]
+ });
+ } catch (error) {
+ // handle errors
+ }
+}
+```
+
+## Cache and retrieve tokens
+
+Like ADAL.js, MSAL.js caches tokens and other authentication artifacts in browser storage, using the [Web Storage API](https://developer.mozilla.org/docs/Web/API/Web_Storage_API). We recommend using the `sessionStorage` option (see [configuration](#configure-msal)) because it is more secure for storing tokens acquired by your users, but `localStorage` gives you [single sign-on](./msal-js-sso.md) across tabs and user sessions.
+
+Importantly, you are not supposed to access the cache directly. Instead, you should use an appropriate MSAL.js API for retrieving authentication artifacts like access tokens or user accounts.
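+
+For example, a sketch of retrieving a cached account and silently acquiring a token for it:
+
+```javascript
+const currentAccounts = msalInstance.getAllAccounts();
+
+if (currentAccounts.length > 0) {
+    msalInstance.acquireTokenSilent({
+        account: currentAccounts[0],
+        scopes: ["User.Read"]
+    }).then((response) => {
+        // use response.accessToken
+    });
+}
+```
+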
+
+## Renew tokens with refresh tokens
+
+ADAL.js uses the [OAuth 2.0 implicit flow](./v2-oauth2-implicit-grant-flow.md), which does not return refresh tokens for security reasons (refresh tokens have a longer lifetime than access tokens and are therefore more dangerous in the hands of malicious actors). Hence, ADAL.js performs token renewal using a hidden iframe so that the user is not repeatedly prompted to authenticate.
+
+With the authorization code flow with PKCE, apps using MSAL.js 2.x obtain refresh tokens along with ID and access tokens, which can be used to renew them. The usage of refresh tokens is abstracted away, and developers are not supposed to build logic around them; instead, MSAL manages token renewal by itself. Your previous token cache from ADAL.js is not transferable to MSAL.js, as the token cache schema has changed and is incompatible with the schema used in ADAL.js.
+
+## Handle errors and exceptions
+
+When using MSAL.js, the most common type of error you might face is the `interaction_in_progress` error. This error is thrown when an interactive API (`loginPopup`, `loginRedirect`, `acquireTokenPopup`, `acquireTokenRedirect`) is invoked while another interactive API is still in progress. The `login*` and `acquireToken*` APIs are *async* so you will need to ensure that the resulting promises have resolved before invoking another one.
+
+Another common error is `interaction_required`. This error is often resolved by simply initiating an interactive token acquisition prompt. For instance, the web API you are trying to access might have a [conditional access](../conditional-access/overview.md) policy in place, requiring the user to perform [multifactor authentication](../authentication/concept-mfa-howitworks.md) (MFA). In that case, handling the `interaction_required` error by triggering `acquireTokenPopup` or `acquireTokenRedirect` will prompt the user for MFA, allowing them to fulfill it.
+
+Yet another common error you might face is `consent_required`, which occurs when permissions required for obtaining an access token for a protected resource have not been consented to by the user. As with `interaction_required`, the solution for the `consent_required` error is often initiating an interactive token acquisition prompt, using either `acquireTokenPopup` or `acquireTokenRedirect`.
+
+For more information, see [Common MSAL.js errors and how to handle them](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/errors.md).
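+
+For instance, a sketch of falling back to interaction when silent token acquisition fails:
+
+```javascript
+msalInstance.acquireTokenSilent({
+    account: msalInstance.getAllAccounts()[0],
+    scopes: ["User.Read"]
+}).catch((error) => {
+    if (error instanceof msal.InteractionRequiredAuthError) {
+        // fall back to interaction, for example to satisfy MFA or consent
+        return msalInstance.acquireTokenPopup({ scopes: ["User.Read"] });
+    }
+    throw error;
+});
+```
+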
+
+## Use the Events API
+
+MSAL.js (>=v2.4) introduces an events API that you can make use of in your apps. These events are related to the authentication process and what MSAL is doing at any moment, and can be used to update the UI, show error messages, check whether any interaction is in progress, and so on. For instance, below is an event callback that is called when the login process fails for any reason:
+
+```javascript
+const callbackId = msalInstance.addEventCallback((message) => {
+ // Update UI or interact with EventMessage here
+ if (message.eventType === EventType.LOGIN_FAILURE) {
+ if (message.error instanceof AuthError) {
+ // Do something with the error
+ }
+ }
+});
+```
+
+For performance, it is important to unregister event callbacks when they are no longer needed. For more information, see the [MSAL.js Events API](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/events.md).
+
+## Handle multiple accounts
+
+ADAL.js has the concept of a *user* to represent the currently authenticated entity. MSAL.js replaces *users* with *accounts*, because a user can have more than one account associated with them. This also means that you now need to handle multiple accounts and choose the appropriate one to work with. The snippet below illustrates this process:
+
+```javascript
+let homeAccountId = null; // Initialize global accountId (can also be localAccountId or username) used for account lookup later, ideally stored in app state
+
+// This callback is passed into `acquireTokenPopup` and `acquireTokenRedirect` to handle the interactive auth response
+function handleResponse(resp) {
+ if (resp !== null) {
+        homeAccountId = resp.account.homeAccountId; // alternatively: resp.account.localAccountId or resp.account.username
+    } else {
+        const currentAccounts = msalInstance.getAllAccounts();
+ if (currentAccounts.length < 1) { // No cached accounts
+ return;
+ } else if (currentAccounts.length > 1) { // Multiple account scenario
+ // Add account selection logic here
+ } else if (currentAccounts.length === 1) {
+ homeAccountId = currentAccounts[0].homeAccountId; // Single account scenario
+ }
+ }
+}
+```
+
+For more information, see: [Accounts in MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/accounts.md)
+
+## Use the wrapper libraries
+
+If you are developing for Angular and React frameworks, you can use [MSAL Angular v2](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-angular) and [MSAL React](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-react), respectively. These wrappers expose the same public API as MSAL.js while offering framework-specific methods and components that can streamline the authentication and token acquisition processes.
+
+## Run the app
+
+Once your changes are done, run the app and test your authentication scenario:
+
+```console
+npm start
+```
+
+## Example: Securing web apps with ADAL.js vs. MSAL.js
+
+The snippets below demonstrate the minimal code required for a single-page application that authenticates users with the Microsoft identity platform and gets an access token for Microsoft Graph, first using ADAL.js and then using MSAL.js:
+
+<table>
+<tr><td> Using ADAL.js </td><td> Using MSAL.js </td></tr>
+<tr>
+<td>
+
+```html
+
+<head>
+ <meta charset="UTF-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+
+ <script
+ type="text/javascript"
+ src="https://secure.aadcdn.microsoftonline-p.com/lib/1.0.18/js/adal.min.js">
+ </script>
+</head>
+
+<div>
+ <button id="loginButton">Login</button>
+ <button id="logoutButton" style="visibility: hidden;">Logout</button>
+ <button id="tokenButton" style="visibility: hidden;">Get Token</button>
+</div>
+
+<body>
+ <script>
+
+ const loginButton = document.getElementById("loginButton");
+ const logoutButton = document.getElementById("logoutButton");
+ const tokenButton = document.getElementById("tokenButton");
+
+ var authContext = new AuthenticationContext({
+ instance: 'https://login.microsoftonline.com/',
+ clientId: "ENTER_CLIENT_ID",
+ tenant: "ENTER_TENANT_ID",
+ cacheLocation: "sessionStorage",
+ redirectUri: "http://localhost:3000",
+ popUp: true,
+ callback: function (errorDesc, token, error, tokenType) {
+ console.log('Hello ' + authContext.getCachedUser().profile.upn)
+
+ loginButton.style.visibility = "hidden";
+ logoutButton.style.visibility = "visible";
+ tokenButton.style.visibility = "visible";
+ }
+ });
+
+ authContext.log({
+ level: 3,
+ log: function (message) {
+ console.log(message);
+ },
+ piiLoggingEnabled: false
+ });
+
+ loginButton.addEventListener('click', function () {
+ authContext.login();
+ });
+
+ logoutButton.addEventListener('click', function () {
+ authContext.logOut();
+ });
+
+ tokenButton.addEventListener('click', () => {
+ authContext.acquireTokenPopup(
+ "https://graph.microsoft.com",
+ null, null,
+ function (error, token) {
+ console.log(error, token);
+ }
+ )
+ });
+ </script>
+</body>
+
+</html>
+
+```
+
+</td>
+<td>
+
+```html
+
+<head>
+ <meta charset="UTF-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+
+ <script
+ type="text/javascript"
+ src="https://alcdn.msauth.net/browser/2.14.2/js/msal-browser.min.js">
+ </script>
+</head>
+
+<div>
+ <button id="loginButton">Login</button>
+ <button id="logoutButton" style="visibility: hidden;">Logout</button>
+ <button id="tokenButton" style="visibility: hidden;">Get Token</button>
+</div>
+
+<body>
+ <script>
+ const loginButton = document.getElementById("loginButton");
+ const logoutButton = document.getElementById("logoutButton");
+ const tokenButton = document.getElementById("tokenButton");
+
+ const pca = new msal.PublicClientApplication({
+ auth: {
+ clientId: "ENTER_CLIENT_ID",
+ authority: "https://login.microsoftonline.com/ENTER_TENANT_ID",
+ redirectUri: "http://localhost:3000",
+ },
+ cache: {
+ cacheLocation: "sessionStorage"
+ },
+ system: {
+ loggerOptions: {
+ loggerCallback(loglevel, message, containsPii) {
+ console.log(message);
+ },
+ piiLoggingEnabled: false,
+ logLevel: msal.LogLevel.Verbose,
+ }
+ }
+ });
+
+ loginButton.addEventListener('click', () => {
+ pca.loginPopup().then((response) => {
+ console.log(`Hello ${response.account.username}!`);
+
+ loginButton.style.visibility = "hidden";
+ logoutButton.style.visibility = "visible";
+ tokenButton.style.visibility = "visible";
+ })
+ });
+
+ logoutButton.addEventListener('click', () => {
+ pca.logoutPopup().then((response) => {
+ window.location.reload();
+ })
+ });
+
+ tokenButton.addEventListener('click', () => {
+ pca.acquireTokenPopup({
+ scopes: ["User.Read"]
+ }).then((response) => {
+ console.log(response);
+ })
+ });
+ </script>
+</body>
+
+</html>
+
+```
+
+</td>
+</tr>
+</table>
## Next steps
-For more information, refer to [v1.0 and v2.0 comparison](../azuread-dev/azure-ad-endpoint-comparison.md).
+
+- [MSAL.js API reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/)
+- [MSAL.js code samples](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples)
active-directory Scenario Protected Web Api Verification Scope App Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles.md
But this protection isn't enough. It guarantees only that ASP.NET and ASP.NET Co
If a client app calls your API on behalf of a user, the API needs to request a bearer token that has specific scopes for the API. For more information, see [Code configuration | Bearer token](scenario-protected-web-api-app-configuration.md#bearer-token).
-### .NET Core
+### [ASP.NET Core](#tab/aspnetcore)
+
+In ASP.NET Core, you can use Microsoft.Identity.Web to verify scopes in each controller action. You can also verify them at the level of the controller or for the whole application.
#### Verify the scopes on each controller action
+You can verify the scopes in the controller action by using the `[RequiredScope]` attribute. This attribute
+has several overloads: one that takes the required scopes directly, and one that takes a key to the configuration.
+
+##### Verify the scopes on a controller action with hardcoded scopes
+
+The following code snippet shows the usage of the `[RequiredScope]` attribute with hardcoded scopes.
+ ```csharp
+using Microsoft.Identity.Web;
+
+[Authorize]
+public class TodoListController : Controller
+{
+    /// <summary>
+    /// The web API will accept only tokens that have the `access_as_user` scope for
+    /// this API. Attribute arguments must be compile-time constants, so the scope
+    /// is declared as a const string.
+    /// </summary>
+    const string scopeRequiredByApi = "access_as_user";
+
+    // GET: api/values
+    [HttpGet]
+    [RequiredScope(scopeRequiredByApi)]
+    public IEnumerable<TodoItem> Get()
+    {
+        // Do the work and return the result.
+        // ...
+    }
+    // ...
+}
+```
+
+##### Verify the scopes on a controller action with scopes defined in configuration
+
+You can also declare these required scopes in the configuration, and reference the configuration key:
+
+For instance, if you have the following configuration in appsettings.json:
+
+```json
+{
+ "AzureAd" : {
+ // more settings
+ "Scopes" : "access_as_user access_as_admin"
+ }
+}
+```
+
+Then, reference it in the `[RequiredScope]` attribute:
+
+```csharp
+using Microsoft.Identity.Web;
+
+[Authorize]
+public class TodoListController : Controller
+{
+ // GET: api/values
+ [HttpGet]
+    [RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")]
+ public IEnumerable<TodoItem> Get()
+ {
+ // Do the work and return the result.
+ // ...
+ }
+ // ...
+}
+```
+
+##### Verify scopes conditionally
+
+There are cases where you want to verify scopes conditionally. You can do this using the `VerifyUserHasAnyAcceptedScope` extension method on the `HttpContext`.
+
+```csharp
+using Microsoft.Identity.Web;
+
+[Authorize]
+public class TodoListController : Controller
+{
+    static readonly string[] scopeRequiredByApi = new string[] { "access_as_user" };
+
+    [HttpGet]
+    public IEnumerable<TodoItem> Get()
+    {
+        HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);
+        // Do the work and return the result.
+    }
+}
+```
-The `VerifyUserHasAnyAcceptedScope` method does something like the following steps:
+#### Verify the scopes at the level of the controller
+
+You can also verify the scopes for the whole controller.
+
+##### Verify the scopes on a controller with hardcoded scopes
+
+The following code snippet shows the usage of the `[RequiredScope]` attribute with hardcoded scopes on the controller.
+
+```csharp
+using Microsoft.Identity.Web;
+
+// The web API will accept only tokens 1) for users, 2) that have the `access_as_user`
+// scope for this API. A literal scope is used because attribute arguments must be
+// compile-time constants (`const` arrays are not legal in C#).
+[Authorize]
+[RequiredScope("access_as_user")]
+public class TodoListController : Controller
+{
+
+ // GET: api/values
+ [HttpGet]
+ public IEnumerable<TodoItem> Get()
+ {
+ // Do the work and return the result.
+ // ...
+ }
+ // ...
+}
+```
+
+##### Verify the scopes on a controller with scopes defined in configuration
+
+As with actions, you can also declare these required scopes in the configuration, and reference the configuration key:
+
+```csharp
+using Microsoft.Identity.Web;
+
+[Authorize]
+[RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")]
+public class TodoListController : Controller
+{
+ // GET: api/values
+ [HttpGet]
+ public IEnumerable<TodoItem> Get()
+ {
+ // Do the work and return the result.
+ // ...
+ }
+ // ...
+}
+```
+
+#### Verify the scopes more globally
+
+Defining granular scopes for your web API and verifying the scopes in each controller action is the recommended approach. However, it's also possible to verify the scopes at the level of the application or a controller. For details, see [Claim-based authorization](/aspnet/core/security/authorization/claims) in the ASP.NET Core documentation.
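+
+For instance, here's a minimal sketch of an application-wide policy requiring the `access_as_user` scope (the policy name and scope value are illustrative):
+
+```csharp
+using System.Linq;
+
+// In Startup.ConfigureServices:
+services.AddAuthorization(options =>
+{
+    options.AddPolicy("AccessAsUser", policy =>
+        // The scope claim is a space-separated list, so check for the scope within it.
+        policy.RequireAssertion(context =>
+            (context.User.FindFirst("http://schemas.microsoft.com/identity/claims/scope")?.Value ?? "")
+                .Split(' ').Contains("access_as_user")));
+});
+```
+
+You can then apply the policy with `[Authorize(Policy = "AccessAsUser")]` on a controller or action.
+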
+
+#### What is verified?
+
+The `[RequiredScope]` attribute and the `VerifyUserHasAnyAcceptedScope` method do something like the following steps:
- Verify there's a claim named `http://schemas.microsoft.com/identity/claims/scope` or `scp`.
- Verify the claim has a value that contains the scope expected by the API.
+### [ASP.NET Classic](#tab/aspnet)
-#### Verify the scopes more globally
+In an ASP.NET application, you can validate scopes in the following way:
+
+```csharp
+[Authorize]
+public class TodoListController : ApiController
+{
+    public IEnumerable<TodoItem> Get()
+    {
+        ValidateScopes(new[] { "read", "admin" });
+        // ...
+    }
+}
+```
+
+Below is a simplified version of `ValidateScopes`:
-Defining granular scopes for your web API and verifying the scopes in each controller action is the recommended approach. However, it's also possible to verify the scopes at the level of the application or a controller by using ASP.NET Core. For details, see [Claim-based authorization](/aspnet/core/security/authorization/claims) in the ASP.NET core documentation.
+```csharp
+private void ValidateScopes(IEnumerable<string> acceptedScopes)
+{
+ //
+    // The `scp` claim contains the delegated permissions (scopes) granted to the
+    // client application, as a space-separated list.
+ //
+ Claim scopeClaim = ClaimsPrincipal.Current.FindFirst("scp");
+ if (scopeClaim == null || !scopeClaim.Value.Split(' ').Intersect(acceptedScopes).Any())
+ {
+ throw new HttpResponseException(new HttpResponseMessage
+ { StatusCode = HttpStatusCode.Forbidden,
+ ReasonPhrase = $"The 'scp' claim does not contain '{scopeClaim}' or was not found"
+ });
+ }
+}
+```
-### .NET MVC
+For a full version of `ValidateScopes` for ASP.NET Core, see [*ScopesRequiredHttpContextExtensions.cs*](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web/Resource/ScopesRequiredHttpContextExtensions.cs).
-For ASP.NET, just replace `HttpContext.User` with `ClaimsPrincipal.Current`, and replace the claim type `"http://schemas.microsoft.com/identity/claims/scope"` with `"scp"`. Also see the code snippet later in this article.
+ ## Verify app roles in APIs called by daemon apps
If your web API is called by a [daemon app](scenario-daemon-overview.md), that a
You now need to have your API verify that the token it receives contains the `roles` claim and that this claim has the expected value. The verification code is similar to the code that verifies delegated permissions, except that your controller action tests for roles instead of scopes:
-### ASP.NET Core
+### [ASP.NET Core](#tab/aspnetcore)
+
+The following code snippet shows how to verify the application role:
+```csharp
+using Microsoft.Identity.Web;
+
+[Authorize]
+public class TodoListController : ApiController
+{
+    public IEnumerable<TodoItem> Get()
+    {
+        HttpContext.ValidateAppRole("access_as_application");
+        // Do the work and return the result.
+    }
+}
+```
-The `ValidateAppRole` method is defined in Microsoft.Identity.Web in [RolesRequiredHttpContextExtensions.cs](https://github.com/AzureAD/microsoft-identity-web/blob/d2ad0f5f830391a34175d48621a2c56011a45082/src/Microsoft.Identity.Web/Resource/RolesRequiredHttpContextExtensions.cs#L28).
+Instead, you can use the [Authorize("role")] attributes on the controller or an action (or a razor page).
+
+```csharp
+[Authorize(Roles = "access_as_application")]
+public class MyController : ApiController
+{
+    // ...
+}
+```
+
+But for this, you'll need to map the role claim to "roles" in the Startup.cs file:
+
+```csharp
+services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
+{
+    // The claim in the JWT token where app roles are available.
+    options.TokenValidationParameters.RoleClaimType = "roles";
+});
+```
+
+This isn't the best solution if you also need to do authorization based on groups.
+
+For details, see the web app incremental tutorial on [authorization by roles and groups](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/5-WebApp-AuthZ).
-### ASP.NET MVC
+### [ASP.NET Classic](#tab/aspnet)
+
+In an ASP.NET application, you can validate app roles in the following way:
+
+```csharp
+[Authorize]
+public class TodoListController : ApiController
+{
+    public IEnumerable<TodoItem> Get()
+    {
+        ValidateAppRole("access_as_application");
+        // ...
+    }
+}
+```
+
+A simplified version of `ValidateAppRole` is:
```csharp
private void ValidateAppRole(string appRole)
{
    //
    // The `roles` claim tells you what permissions the client application has in the service.
    // In this case, we look for a `roles` value of `access_as_application`.
    //
    Claim roleClaim = ClaimsPrincipal.Current.FindFirst("roles");
    if (roleClaim == null || !roleClaim.Value.Split(' ').Contains(appRole))
    {
        throw new HttpResponseException(new HttpResponseMessage
        { StatusCode = HttpStatusCode.Forbidden,
          ReasonPhrase = $"The 'roles' claim does not contain '{appRole}' or was not found"
        });
    }
}
```
+For a full version of `ValidateAppRole` for ASP.NET Core, see the [*RolesRequiredHttpContextExtensions.cs*](https://github.com/AzureAD/microsoft-identity-web/blob/master/src/Microsoft.Identity.Web/Resource/RolesRequiredHttpContextExtensions.cs) code.
### Accepting app-only tokens if the web API should be called only by daemon apps

Users can also use roles claims in user assignment patterns, as shown in [How to: Add app roles in your application and receive them in the token](howto-add-app-roles-in-azure-ad-apps.md). If the roles are assignable to both, checking roles will let apps sign in as users and users sign in as apps. We recommend that you declare different roles for users and apps to prevent this confusion.
bool isAppOnlyToken = oid == sub;
Checking the inverse condition allows only apps that sign in a user to call your API.
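
For example, a sketch of this check (the claim URIs assume the default ASP.NET claim mapping and are illustrative):

```csharp
// In an app-only (client credentials) token, the `oid` and `sub` claims have the same value.
string oid = HttpContext.User.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
string sub = HttpContext.User.FindFirst("http://schemas.microsoft.com/identity/claims/nameidentifier")?.Value;
bool isAppOnlyToken = oid == sub;

if (!isAppOnlyToken)
{
    // Reject the call if only daemon apps should be allowed to call this API.
}
```
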
+### Using ACL-based authorization
+
+As an alternative to app-role-based authorization, you can
+protect your web API with an Access Control List (ACL) based authorization pattern to [control tokens without the `roles` claim](v2-oauth2-client-creds-grant-flow.md#controlling-tokens-without-the-roles-claim).
+
+If you are using Microsoft.Identity.Web on ASP.NET Core, you'll need to declare that you are using ACL-based authorization; otherwise, Microsoft.Identity.Web throws an exception when neither roles nor scopes are present in the claims provided:
+
+```Text
+System.UnauthorizedAccessException: IDW10201: Neither scope or roles claim was found in the bearer token.
+```
+
+To avoid this exception, set the `AllowWebApiToBeAuthorizedByACL` configuration property to `true`, either in appsettings.json or programmatically.
+
+```json
+{
+    "AzureAd": {
+        // other properties
+        "AllowWebApiToBeAuthorizedByACL": true,
+        // other properties
+    }
+}
+```
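+
+To set the property programmatically instead, here's a sketch (assuming the option is exposed on Microsoft.Identity.Web's `MicrosoftIdentityOptions` class, which binds the `AzureAd` configuration section):
+
+```csharp
+// In Startup.ConfigureServices, after adding Microsoft.Identity.Web
+services.Configure<MicrosoftIdentityOptions>(options =>
+{
+    options.AllowWebApiToBeAuthorizedByACL = true;
+});
+```
+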
+
+If you set `AllowWebApiToBeAuthorizedByACL` to `true`, it is **your responsibility** to implement and enforce the ACL mechanism.
+
+## Next steps
+
+Move on to the next article in this scenario,
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
The above access token is a v1.0-formatted token for Microsoft Graph. This is be
### Error response example
-An error response is returned by the token endpoint when trying to acquire an access token for the downstream API, if the downstream API has a Conditional Access policy (such as [multi-factor authentication](../authentication/concept-mfa-howitworks.md)) set on it. The middle-tier service should surface this error to the client application so that the client application can provide the user interaction to satisfy the Conditional Access policy.
+An error response is returned by the token endpoint when trying to acquire an access token for the downstream API, if the downstream API has a Conditional Access policy (such as [multifactor authentication](../authentication/concept-mfa-howitworks.md)) set on it. The middle-tier service should surface this error to the client application so that the client application can provide the user interaction to satisfy the Conditional Access policy.
```json { "error":"interaction_required",
- "error_description":"AADSTS50079: Due to a configuration change made by your administrator, or because you moved to a new location, you must enroll in multi-factor authentication to access 'bf8d80f9-9098-4972-b203-500f535113b1'.\r\nTrace ID: b72a68c3-0926-4b8e-bc35-3150069c2800\r\nCorrelation ID: 73d656cf-54b1-4eb2-b429-26d8165a52d7\r\nTimestamp: 2017-05-01 22:43:20Z",
+ "error_description":"AADSTS50079: Due to a configuration change made by your administrator, or because you moved to a new location, you must enroll in multifactor authentication to access 'bf8d80f9-9098-4972-b203-500f535113b1'.\r\nTrace ID: b72a68c3-0926-4b8e-bc35-3150069c2800\r\nCorrelation ID: 73d656cf-54b1-4eb2-b429-26d8165a52d7\r\nTimestamp: 2017-05-01 22:43:20Z",
"error_codes":[50079], "timestamp":"2017-05-01 22:43:20Z", "trace_id":"b72a68c3-0926-4b8e-bc35-3150069c2800",
A service-to-service request for a SAML assertion contains the following paramet
| assertion |required | The value of the access token used in the request.| | client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. | | client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. |
-| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). Eg. https://testapp.contoso.com/user_impersonation openid |
+| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). For example, 'https://testapp.contoso.com/user_impersonation openid' |
| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be **on_behalf_of**. | | requested_token_type | required | Specifies the type of token requested. The value can be **urn:ietf:params:oauth:token-type:saml2** or **urn:ietf:params:oauth:token-type:saml1** depending on the requirements of the accessed resource. |
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-permissions-and-consent.md
Previously updated : 05/25/2021 Last updated : 07/06/2021
The access token is valid for a short time. It usually expires in one hour. At t
For more information about how to get and use refresh tokens, see the [Microsoft identity platform protocol reference](active-directory-v2-protocols.md).
-## Incremental and dynamic consent
+## Consent types
+
+Applications on the Microsoft identity platform rely on consent to gain access to necessary resources or APIs. There are several kinds of consent that your app may need to know about in order to be successful. If you are defining permissions, you will also need to understand how your users will gain access to your app or API.
+
+### Static user consent
+
+In the static user consent scenario, you must specify all the permissions the app needs in the app's configuration in the Azure portal. If the user (or administrator, as appropriate) has not granted consent for this app, the Microsoft identity platform prompts the user to provide consent at that time.
+
+Static permissions also enable administrators to [consent on behalf of all users](#requesting-consent-for-an-entire-tenant) in the organization.
+
+While static permissions defined in the Azure portal keep the code nice and simple, they present some possible issues for developers:
+
+- The app needs to request all the permissions it would ever need upon the user's first sign-in. This can lead to a long list of permissions that discourages end users from approving the app's access on initial sign-in.
+
+- The app needs to know all of the resources it would ever access ahead of time. It's difficult to create apps that can access an arbitrary number of resources.
+
+### Incremental and dynamic user consent
+ With the Microsoft identity platform endpoint, you can ignore the static permissions defined in the app registration information in the Azure portal and request permissions incrementally instead. You can ask for a bare minimum set of permissions upfront and request more over time as the customer uses additional app features. To do so, specify the scopes your app needs at any time by including the new scopes in the `scope` parameter when [requesting an access token](#requesting-individual-user-consent), without the need to pre-define them in the application registration information. If the user hasn't yet consented to new scopes added to the request, they'll be prompted to consent only to the new permissions. Incremental, or dynamic, consent applies only to delegated permissions and not to application permissions.

Allowing an app to request permissions dynamically through the `scope` parameter gives you full control over your user's experience. You can also front-load your consent experience and ask for all permissions in one initial authorization request. If your app requires a large number of permissions, you can gather those permissions from the user incrementally as they try to use certain features of the app over time.
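As a rough sketch, a sign-in request that adds a new scope dynamically might look like the following. The client ID, redirect URI, and Microsoft Graph scope are placeholder values; open the resulting URL in a browser to start the flow.

```bash
# Hypothetical authorization request that adds a Microsoft Graph scope at
# sign-in time. The user is prompted only for scopes they haven't yet
# consented to. All parameter values are placeholders.
echo "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?\
client_id=<app-id>\
&response_type=code\
&redirect_uri=https%3A%2F%2Flocalhost%2Fcallback\
&response_mode=query\
&scope=openid%20offline_access%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.read"
```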
-[Admin consent](#using-the-admin-consent-endpoint) done on behalf of an organization still requires the static permissions registered for the app, so you should set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization. This reduces the cycles required by the organization admin to set up the application.
+> [!IMPORTANT]
+> Dynamic consent can be convenient, but it presents a big challenge for permissions that require admin consent, because the admin consent experience doesn't know about those permissions at consent time. If you require admin-privileged permissions or if your app uses dynamic consent, you must register all of the permissions in the Azure portal (not just the subset of permissions that require admin consent). This enables tenant admins to consent on behalf of all their users.
+
+### Admin consent
+
+[Admin consent](#using-the-admin-consent-endpoint) is required when your app needs access to certain high-privilege permissions. Admin consent ensures that administrators have additional controls before authorizing apps or users to access highly privileged data from the organization.
+
+[Admin consent done on behalf of an organization](#requesting-consent-for-an-entire-tenant) still requires the static permissions registered for the app. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization. This reduces the cycles required by the organization admin to set up the application.
## Requesting individual user consent
When the user approves the permission request, consent is recorded. The user doe
When an organization purchases a license or subscription for an application, the organization often wants to proactively set up the application for use by all members of the organization. As part of this process, an administrator can grant consent for the application to act on behalf of any user in the tenant. If the admin grants consent for the entire tenant, the organization's users don't see a consent page for the application.
-To request consent for delegated permissions for all users in a tenant, your app can use the admin consent endpoint.
+Admin consent done on behalf of an organization requires the static permissions registered for the app. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization.
+
+To request consent for delegated permissions for all users in a tenant, your app can use the [admin consent endpoint](#using-the-admin-consent-endpoint).
Additionally, applications must use the admin consent endpoint to request application permissions.
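As an illustrative sketch, an admin consent request takes a form like the following; the tenant ID, client ID, and redirect URI are placeholders.

```bash
# Hypothetical admin consent request. A tenant administrator opens this URL
# in a browser and consents on behalf of the whole organization. All values
# are placeholders.
echo "https://login.microsoftonline.com/<tenant-id>/v2.0/adminconsent?\
client_id=<app-id>\
&redirect_uri=https%3A%2F%2Flocalhost%2Fcallback\
&state=12345"
```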
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/troubleshoot.md
If you are using federation authentication and the user does not already exist i
To resolve this issue, the external user's admin must synchronize the user's account to Azure Active Directory.
+### External user has a proxyAddress that conflicts with a proxyAddress of an existing local user
+
+When we check whether a user can be invited to your tenant, one of the things we check is whether there's a collision in the proxyAddress. This check includes any proxyAddresses for the user in their home tenant and any proxyAddress for local users in your tenant. For external users, we add the email to the proxyAddress of the existing B2B user. For local users, you can ask them to sign in using the account they already have.
+
## I can't invite an email address because of a conflict in proxyAddresses

This happens when another object in the directory has the same invited email address as one of its proxyAddresses. To fix this conflict, remove the email from the [user](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true) object, and also delete the associated [contact](/graph/api/resources/contact?view=graph-rest-1.0&preserve-view=true) object before trying to invite this email again.
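To locate the conflicting object, a Microsoft Graph query along these lines may help. This is a sketch: the email address is a placeholder, and the filter syntax is an assumption worth verifying against the Microsoft Graph documentation for your scenario.

```bash
# Sketch: list users whose proxyAddresses contain the conflicting address.
# 'smtp:user@contoso.com' is a placeholder; proxyAddresses entries are
# case-sensitive ('SMTP:' marks the primary address).
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/users?\$filter=proxyAddresses/any(p:p eq 'smtp:user@contoso.com')"
```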
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Previously updated : 05/03/2021 Last updated : 07/07/2021
These free security defaults allow registration and use of Azure AD Multi-Factor
- *** App passwords are only available in per-user MFA with legacy authentication scenarios, and only if enabled by administrators.

> [!WARNING]
-> Do not disable methods for your organization if you are using Security Defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-verification-options).
+> Do not disable methods for your organization if you are using Security Defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-authentication-methods-for-mfa).
### Disabled MFA status
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
+
+ Title: Configure separation of duties for an access package in Azure AD entitlement management - Azure Active Directory
+description: Learn how to configure separation of duties enforcement for requests for an access package in Azure Active Directory entitlement management.
+
+documentationCenter: ''
++
+editor:
++
+ na
+ms.devlang: na
++ Last updated : 07/2/2021++++
+#Customer intent: As a global administrator or access package manager, I want to configure that a user cannot request an access package if they already have incompatible access.
++
+# Configure separation of duties checks for an access package in Azure AD entitlement management (Preview)
+
+In each of an access package's policies, you can specify who is able to request that access package, such as all member users in your organization, or only users who are already a member of a particular group. However, you may wish to further restrict access, in order to prevent a user from obtaining excessive access.
+
+With the separation of duties settings on an access package, you can block a user from requesting the access package if they already have an assignment to another access package or are a member of a group.
+
+For example, you have an access package, *Marketing Campaign*, that people across your organization and other organizations can request in order to work with your organization's marketing department on that campaign. Since employees in the marketing department should already have access to that marketing campaign material, you wouldn't want employees in the marketing department to request access to that access package. Or, you may already have a dynamic group, *Marketing department employees*, with all of the marketing employees in it. You could indicate that the access package is incompatible with membership in that dynamic group. Then, if a marketing department employee looks for an access package to request, they can't request access to the *Marketing campaign* access package.
+
+Similarly, you may have an application with two roles - **Western Sales** and **Eastern Sales** - and want to ensure that a user can only have one sales territory at a time. If you have two access packages, one access package **Western Territory** giving the **Western Sales** role and the other access package **Eastern Territory** giving the **Eastern Sales** role, then you can configure
+ - the **Western Territory** access package has the **Eastern Territory** package as incompatible, and
+ - the **Eastern Territory** access package has the **Western Territory** package as incompatible.
+
+## Prerequisites
+
+To use Azure AD entitlement management and assign users to access packages, you must have one of the following licenses:
+
+- Azure AD Premium P2
+- Enterprise Mobility + Security (EMS) E5 license
+
+## Configure another access package or group membership as incompatible for requesting access to an access package
+
+**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+
+Follow these steps to change the list of incompatible groups or other access packages for an existing access package:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Click **Azure Active Directory**, and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package that users will request.
+
+1. In the left menu, click **Separation of duties (preview)**.
+
+1. If you wish to prevent users who already have another access package assignment from requesting this access package, click on **Add access package** and select the access package that the user may already be assigned.
+
+1. If you wish to prevent users who have an existing group membership from requesting this access package, click on **Add group** and select the group that the user may already be in.
+
+## View other access packages that are configured as incompatible with this one
+
+**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+
+Follow these steps to view the list of other access packages that are configured as incompatible with an existing access package:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Click **Azure Active Directory**, and then click **Identity Governance**.
+
+1. In the left menu, click **Access packages** and then open the access package.
+
+1. In the left menu, click **Separation of duties (preview)**.
+
+1. Click on **Incompatible With**.
+
+## Next steps
+
+- [View, add, and remove assignments for an access package](entitlement-management-access-package-assignments.md)
+- [View reports and logs](entitlement-management-reports.md)
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
na
ms.devlang: na Previously updated : 09/16/2020 Last updated : 07/01/2021
To change the request and approval settings for an access package, you need to o
1. If you are editing a policy, click **Update**. If you are adding a new policy, click **Create**.
+## Prevent requests from users with incompatible access (preview)
+
+In addition to the policy checks on who can request, you may wish to further restrict access, in order to prevent a user who already has some access (via a group or another access package) from obtaining excessive access.
+
+If you want to prevent a user from requesting an access package when they already have an assignment to another access package or are a member of a group, use the steps at [Configure separation of duties checks for an access package](entitlement-management-access-package-incompatible.md).
+ ## Next steps - [Change the approval settings for an access package](entitlement-management-access-package-approval-policy.md)
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-reviews-create.md
This setting determines how often access reviews will occur.
1. Set the **Duration** to define how many days each review of the recurring series will be open for input from reviewers. For example, you might schedule an annual review that starts on January 1st and is open for review for 30 days so that reviewers have until the end of the month to respond.
-1. Next to **Reviewers**, select **Self-review** if you want users to perform their own access review or select **Specific reviewer(s)** if you want to designate a reviewer. You can also select **Manager (Preview)** if you want to designate the revieweeΓÇÖs manager to be the reviewer. If you select this option, you need to add a **fallback** to forward the review to in case the manager cannot be found in the system.
+1. Next to **Reviewers**, select **Self-review** if you want users to perform their own access review or select **Specific reviewer(s)** if you want to designate a reviewer. You can also select **Manager** if you want to designate the reviewee's manager to be the reviewer. If you select this option, you need to add a **fallback** to forward the review to in case the manager cannot be found in the system.
![Select Add reviewers](./media/entitlement-management-access-reviews/access-reviews-add-reviewer.png)
This setting determines how often access reviews will occur.
![Add the fallback reviewers](./media/entitlement-management-access-reviews/access-reviews-add-fallback-manager.png)
-1. Click **Show advanced access review settings (Preview)** to show additional settings.
-
- ![Show the advanced review settings](./media/entitlement-management-access-reviews/access-reviews-advanced-settings.png)
1. At the bottom of the page, click **Review + Create** if you are creating a new access package or **Update** if you are editing an access package.

> [!NOTE]
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
description: List of services that support managed identities for Azure resource
Previously updated : 01/28/2021 Last updated : 06/28/2021
Refer to the following document to reconfigure a managed identity if you have mo
| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
| | :-: | :-: | :-: | :-: |
-| System assigned | ![Available][check] | ![Available][check] | Not available | Not available |
-| User assigned | Not available | Not available | Not available | Not available |
+| System assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
+| User assigned | ![Available][check] | ![Available][check] | Not available | ![Available][check] |
Refer to the following documents to use managed identity with [Azure Automation](../../automation/automation-intro.md):
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
# Integrate Azure AD logs with Azure Monitor logs
+Follow the steps in this article to integrate Azure Active Directory (Azure AD) logs with Azure Monitor.
-Azure Monitor logs allows you to query data to find particular events, analyze trends, and perform correlation across various data sources. With the integration of Azure AD activity logs in Azure Monitor logs, you can now perform tasks like:
+Use the integration of Azure AD activity logs in Azure Monitor logs to perform tasks like:
- * Compare your Azure AD sign-in logs against security logs published by Azure Security Center
-
- * Troubleshoot performance bottlenecks on your applicationΓÇÖs sign-in page by correlating application performance data from Azure Application Insights.
+ * Compare your Azure AD sign-in logs against security logs published by Azure Security Center.
+
+ * Troubleshoot performance bottlenecks on your application's sign-in page by correlating application performance data from Azure Application Insights.
+
+ * Identify sign-ins from applications that use the Active Directory Authentication Library (ADAL) for authentication. [ADAL is nearing end-of-support](../develop/msal-migration.md).
-The following video from an Ignite session demonstrates the benefits of using Azure Monitor logs for Azure AD logs in practical user scenarios.
+This Microsoft Ignite session video shows the benefits of using Azure Monitor logs for Azure AD logs in practical scenarios:
> [!VIDEO https://www.youtube.com/embed/MP5IaCTwkQg?start=1894]
-In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with Azure Monitor.
- ## Supported reports You can route audit activity logs and sign-in activity logs to Azure Monitor logs for further analysis.
You can route audit activity logs and sign-in activity logs to Azure Monitor log
* **Provisioning logs**: With the [provisioning logs](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications. > [!NOTE]
-> B2C-related audit and sign-in activity logs are not supported at this time.
->
+> Azure AD B2C audit and sign-in activity logs are currently unsupported.
## Prerequisites
If you want to know for how long the activity data is stored in a Premium tenant
## Next steps * [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
-* [Install and use the log analytics views for Azure Active Directory](howto-install-use-log-analytics-views.md)
+* [Install and use the log analytics views for Azure Active Directory](howto-install-use-log-analytics-views.md)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Users with the Modern Commerce User role typically have administrative permissio
**When is the Modern Commerce User role assigned?**

* **Self-service purchase in Microsoft 365 admin center** – Self-service purchase gives users a chance to try out new products by buying or signing up for them on their own. These products are managed in the admin center. Users who make a self-service purchase are assigned a role in the commerce system, and the Modern Commerce User role so they can manage their purchases in the admin center. Admins can block self-service purchases (for Power BI, Power Apps, Power Automate) through [PowerShell](/microsoft-365/commerce/subscriptions/allowselfservicepurchase-powershell). For more information, see [Self-service purchase FAQ](/microsoft-365/commerce/subscriptions/self-service-purchase-faq).
-* **Purchases from Microsoft commercial marketplace** ΓÇô Similar to self-service purchase, when a user buys a product or service from Microsoft AppSource or Azure Marketplace, the Modern Commerce User role is assigned if they donΓÇÖt have the Global Administrator or Billing Administrator role. In some cases, users might be blocked from making these purchases. For more information, see [Microsoft commercial marketplace](../../marketplace/marketplace-faq-publisher-guide.md#what-could-block-a-customer-from-completing-a-purchase).
+* **Purchases from Microsoft commercial marketplace** ΓÇô Similar to self-service purchase, when a user buys a product or service from Microsoft AppSource or Azure Marketplace, the Modern Commerce User role is assigned if they donΓÇÖt have the Global Administrator or Billing Administrator role. In some cases, users might be blocked from making these purchases. For more information, see [Microsoft commercial marketplace](../../marketplace/marketplace-faq-publisher-guide.yml#what-could-block-a-customer-from-completing-a-purchase-).
* **Proposals from Microsoft** – A proposal is a formal offer from Microsoft for your organization to buy Microsoft products and services. When the person who is accepting the proposal doesn't have a Global Administrator or Billing Administrator role in Azure AD, they are assigned both a commerce-specific role to complete the proposal and the Modern Commerce User role to access admin center. When they access the admin center, they can only use features that are authorized by their commerce-specific role.
* **Commerce-specific roles** – Some users are assigned commerce-specific roles. If a user isn't a Global Administrator or Billing Administrator, they get the Modern Commerce User role so they can access the admin center.
active-directory Teamviewer Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/teamviewer-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, enter `ttps://webapi.teamviewer.com/scim/v2` in the **Tentant URL** field and enter the script token created earlier in the **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to TeamViewer. If the connection fails, ensure your TeamViewer account has Admin permissions and try again.
+5. Under the **Admin Credentials** section, enter `https://webapi.teamviewer.com/scim/v2` in the **Tenant URL** field and enter the script token created earlier in the **Secret Token** field. Click **Test Connection** to ensure Azure AD can connect to TeamViewer. If the connection fails, ensure your TeamViewer account has Admin permissions and try again.
![Screenshot shows the Admin Credentials dialog box, where you can enter your Tenant U R L and Secret Token.](./media/teamViewer-provisioning-tutorial/provisioning.png)
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/load-balancer-standard.md
Requirements for using your own public IP or prefix:
- Custom public IP addresses must be created and owned by the user. Managed public IP addresses created by AKS can't be reused as bring-your-own custom IPs, as that can cause management conflicts.
- You must ensure the AKS cluster identity (Service Principal or Managed Identity) has permissions to access the outbound IP, as per the [required public IP permissions list](kubernetes-service-principal.md#networking).
-- Make sure you meet the [pre-requisites and constraints](../virtual-network/public-ip-address-prefix.md#constraints) necessary to configure Outbound IPs or Outbound IP prefixes.
+- Make sure you meet the [pre-requisites and constraints](../virtual-network/public-ip-address-prefix.md#limitations) necessary to configure Outbound IPs or Outbound IP prefixes.
#### Update the cluster with your own outbound public IP
az aks update \
#### Create the cluster with your own public IP or prefixes
-You may wish to bring your own IP addresses or IP prefixes for egress at cluster creation time to support scenarios like adding egress endpoints to an allow list. Append the same parameters shown above to your cluster creation step to define your own public IPs and IP prefixes at the start of a cluster's lifecycle.
+You may wish to bring your own IP addresses or IP prefixes for egress at cluster creation time to support scenarios like adding egress endpoints to an allowlist. Append the same parameters shown above to your cluster creation step to define your own public IPs and IP prefixes at the start of a cluster's lifecycle.
Use the *az aks create* command with the *load-balancer-outbound-ips* parameter to create a new cluster with your public IPs at the start.
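For example, a minimal sketch might look like the following; the resource group, cluster name, and public IP resource IDs are placeholders.

```bash
# Sketch: create a cluster whose egress uses two pre-created public IPs.
# Replace the placeholders with your resource group, cluster name, and the
# full resource IDs of your public IPs.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --generate-ssh-keys \
    --load-balancer-outbound-ips "<publicIpResourceId1>,<publicIpResourceId2>"
```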
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
A workload may require splitting a cluster's nodes into separate pools for logic
* If you expand your VNET after creating the cluster, you must update your cluster (perform any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR. AKS will now error out on the agent pool add, though it was originally allowed. If you don't know how to reconcile your cluster, file a support ticket.
* Calico Network Policy is not supported.
* Azure Network Policy is not supported.
-* Kube-proxy expects a single contiguous cidr and uses it this for three optmizations. See this [K.E.P.](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2450-Remove-knowledge-of-pod-cluster-CIDR-from-iptables-rules) and --cluster-cidr [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) for details. In azure cni your first node pool's subnet will be given to kube-proxy.
+* Kube-proxy expects a single contiguous CIDR and uses it for three optimizations. See this [K.E.P.](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2450-Remove-knowledge-of-pod-cluster-CIDR-from-iptables-rules) and --cluster-cidr [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) for details. In Azure CNI, your first node pool's subnet will be given to kube-proxy.
To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.
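For example, a sketch with placeholder names might look like this, assuming an existing cluster and a pre-created subnet:

```bash
# Sketch: add a node pool bound to its own subnet. The resource group,
# cluster name, pool name, and subnet resource ID are placeholders.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool2 \
    --vnet-subnet-id "<subnetResourceId>"
```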
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine [az-list-ips]: /cli/azure/vmss?view=azure-cli-latest&preserve-view=true#az_vmss_list_instance_public_ips [reduce-latency-ppg]: reduce-latency-ppg.md
-[public-ip-prefix-benefits]: ../virtual-network/public-ip-address-prefix.md#why-create-a-public-ip-address-prefix
+[public-ip-prefix-benefits]: ../virtual-network/public-ip-address-prefix.md
[az-public-ip-prefix-create]: /cli/azure/network/public-ip/prefix?view=azure-cli-latest&preserve-view=true#az_network_public_ip_prefix_create [node-image-upgrade]: node-image-upgrade.md [fips]: /azure/compliance/offerings/offering-fips-140-2
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-vnet.md
You can also enable VNET connectivity by using the following methods.
### API version 2021-01-01-preview
-* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/201-api-management-create-with-external-vnet-publicip)
+* Azure Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-create-with-external-vnet-publicip)
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F201-api-management-create-with-external-vnet-publicip%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.apimanagement%2Fapi-management-create-with-external-vnet-publicip%2Fazuredeploy.json)
### API version 2020-12-01
app-service Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/certificates.md
# Certificates and the App Service Environment -
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
The App Service Environment (ASE) is a deployment of the Azure App Service that runs within your Azure Virtual Network (VNet). It can be deployed with an internet-accessible application endpoint or an application endpoint that is in your VNet. If you deploy the ASE with an internet-accessible endpoint, that deployment is called an External ASE. If you deploy the ASE with an endpoint in your VNet, that deployment is called an ILB ASE. You can learn more about the ILB ASE from the [Create and use an ILB ASE](./create-ilb-ase.md) document. The ASE is a single tenant system. Because it is single tenant, there are some features available only with an ASE that are not available in the multi-tenant App Service.

## ILB ASE certificates
-If you are using an External ASE, then your apps are reached at [appname].[asename].p.azurewebsites.net. By default all ASEs, even ILB ASEs, are created with certificates that follow that format. When you have an ILB ASE, the apps are reached based on the domain name that you specify when creating the ILB ASE. In order for the apps to support TLS, you need to upload certificates. Obtain a valid TLS/SSL certificate by using internal certificate authorities, purchasing a certificate from an external issuer, or using a self-signed certificate.
+If you are using an External ASE, then your apps are reached at &lt;appname&gt;.&lt;asename&gt;.p.azurewebsites.net. By default all ASEs, even ILB ASEs, are created with certificates that follow that format. When you have an ILB ASE, the apps are reached based on the domain name that you specify when creating the ILB ASE. In order for the apps to support TLS, you need to upload certificates. Obtain a valid TLS/SSL certificate by using internal certificate authorities, purchasing a certificate from an external issuer, or using a self-signed certificate.
There are two options for configuring certificates with your ILB ASE. You can set a wildcard default certificate for the ILB ASE or set certificates on the individual web apps in the ASE. Regardless of the choice you make, the following certificate attributes must be configured properly:
As a third variant, you can create an ILB ASE certificate that includes all of y
After an ILB ASE is created in the portal, the certificate must be set for the ILB ASE. Until the certificate is set, the ASE will show a banner that the certificate was not set.
-The certificate that you upload must be a .pfx file. After the certificate is uploaded, the ASE will perform a scale operation to set the certificate.
+The certificate that you upload must be a .pfx file. After the certificate is uploaded, there is a time delay of approximately 20 minutes before the certificate is used.
You cannot create the ASE and upload the certificate as one action in the portal or even in one template. As a separate action, you can upload the certificate using a template as described in the [Create an ASE from a template](./create-from-template.md) document.
$password = ConvertTo-SecureString -String "CHANGETHISPASSWORD" -Force -AsPlainT
$fileName = "exportedcert.cer" export-certificate -Cert $certThumbprint -FilePath $fileName -Type CERT
-```
+```
app-service Create External Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/create-external-ase.md
# Create an External App Service environment
-Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet).
- > [!NOTE]
-> Each App Service Environment has a Virtual IP (VIP), which can be used to contact the App Service Environment.
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
-There are two ways to deploy an App Service Environment (ASE):
+Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
-- With a VIP on an external IP address, often called an External ASE.
+- With a VIP on an external, public-facing IP address, often called an External ASE.
- With the VIP on an internal IP address, often called an ILB ASE because the internal endpoint is an Internal Load Balancer (ILB).

This article shows you how to create an External ASE. For an overview of the ASE, see [An introduction to the App Service Environment][Intro]. For information on how to create an ILB ASE, see [Create and use an ILB ASE][MakeILBASE].
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/create-from-template.md
# Create an ASE by using an Azure Resource Manager template ## Overview
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/create-ilb-ase.md
# Create and use an Internal Load Balancer App Service Environment
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
The Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/creation.md
description: Learn how to create an App Service Environment.
ms.assetid: 7690d846-8da3-4692-8647-0bf5adfd862a Previously updated : 06/21/2021 Last updated : 07/06/2021 # Create an App Service Environment > [!NOTE]
-> This article is about the App Service Environment v3 (preview)
+> This article is about the App Service Environment v3, which is used with Isolated v2 App Service plans.
> +
The [App Service Environment (ASE)][Intro] is a single tenant deployment of the App Service that injects into your Azure Virtual Network (VNet). A deployment of an ASE will require use of one subnet. This subnet can't be used for anything other than the ASE.

## Before you create your ASE
The subnet needs to be large enough to hold the maximum size that you'll scale y
## Creating an ASE in the portal
-1. To create an ASE, search the marketplace for **App Service Environment (preview)**.
+1. To create an ASE, search the marketplace for **App Service Environment v3**.
2. Basics: Select the Subscription, select or create the Resource Group, and enter the name of your ASE. Select the Virtual IP type. If you select Internal, your inbound ASE address will be an address in your ASE subnet. If you select External, your inbound ASE address will be a public internet-facing address. The ASE name will also be used for the domain suffix of your ASE. If your ASE name is *contoso* and you have an Internal VIP ASE, then the domain suffix will be *contoso.appserviceenvironment.net*. If your ASE name is *contoso* and you have an external VIP, the domain suffix will be *contoso.p.azurewebsites.net*.

   ![App Service Environment create basics tab](./media/creation/creation-basics.png)

3. Hosting: Select *Enabled* or *Disabled* for Host Group deployment. Host Group deployment is used to select dedicated hardware. If you select Enabled, your ASE will be deployed onto dedicated hardware. When you deploy onto dedicated hardware, you are charged for the entire dedicated host during ASE creation and then a reduced price for your App Service plan instances.

   ![App Service Environment hosting selections](./media/creation/creation-hosting.png)
-4. Networking: Select or create your Virtual Network, select or create your subnet. If you are creating an internal VIP ASE, you will have the option to configure Azure DNS private zones to point your domain suffix to your ASE.
+4. Networking: Select or create your Virtual Network, select or create your subnet. If you are creating an internal VIP ASE, you will have the option to configure Azure DNS private zones to point your domain suffix to your ASE. Details on how to manually configure DNS are in the DNS section under [Using an App Service Environment][UsingASE].
![App Service Environment networking selections](./media/creation/creation-networking.png)

5. Review and Create: Check that your configuration is correct and select **Create**. Your ASE can take up to two hours to create.
- ![App Service Environment review and create](./media/creation/creation-review.png)
-
After your ASE creation completes, you can select it as a location when creating your apps. To learn more about creating apps in your new ASE or managing your ASE, read [Using an App Service Environment][UsingASE].

## Dedicated hosts
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/firewall-integration.md
# Locking down an App Service Environment
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
The App Service Environment (ASE) has a number of external dependencies that it requires access to in order to function properly. The ASE lives in the customer Azure Virtual Network (VNet). Customers must allow the ASE dependency traffic, which is a problem for customers that want to lock down all egress from their VNet.
app-service Forced Tunnel Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/forced-tunnel-support.md
# Configure your App Service Environment with forced tunneling
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
The App Service Environment (ASE) is a deployment of Azure App Service in a customer's Azure Virtual Network. Many customers configure their Azure virtual networks to be extensions of their on-premises networks with VPNs or Azure ExpressRoute connections. Forced tunneling is when you redirect internet bound traffic to your VPN or a virtual appliance instead. Virtual appliances are often used to inspect and audit outbound network traffic.
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/intro.md
# Introduction to the App Service Environments #
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
++
## Overview ##

The Azure App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. This capability can host your:
ASEv1 uses a different pricing model from ASEv2. In ASEv1, you pay for each vCPU
[Kudu]: https://azure.microsoft.com/resources/videos/super-secret-kudu-debug-console-for-azure-web-sites/ [ASEWAF]: app-service-app-service-environment-web-application-firewall.md [AppGW]: ../../web-application-firewall/ag/ag-overview.md
-[ASEAZ]: https://azure.github.io/AppService/2019/12/12/App-Service-Environment-Support-for-Availability-Zones.html
+[ASEAZ]: https://azure.github.io/AppService/2019/12/12/App-Service-Environment-Support-for-Availability-Zones.html
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/management-addresses.md
# App Service Environment management addresses
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that runs in your Azure Virtual Network (VNet). While the ASE does run in your VNet, it must still be accessible from a number of dedicated IP addresses that are used by the Azure App Service to manage the service. In the case of an ASE, the management traffic traverses the user-controlled network. If this traffic is blocked or misrouted, the ASE will become suspended. For details on the ASE networking dependencies, read [Networking considerations and the App Service Environment][networking]. For general information on the ASE, you can start with [Introduction to the App Service Environment][intro].
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/network-info.md
# Networking considerations for an App Service Environment #
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
+
## Overview ##

Azure [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network (VNet). There are two deployment types for an App Service environment (ASE):
When Service Endpoints is enabled on a subnet with an Azure SQL instance, all Az
[ASEManagement]: ./management-addresses.md [serviceendpoints]: ../../virtual-network/virtual-network-service-endpoints-overview.md [forcedtunnel]: ./forced-tunnel-support.md
-[serviceendpoints]: ../../virtual-network/virtual-network-service-endpoints-overview.md
+[serviceendpoints]: ../../virtual-network/virtual-network-service-endpoints-overview.md
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/networking.md
description: App Service Environment networking details
ms.assetid: 6f262f63-aef5-4598-88d2-2f2c2f2bfc24 Previously updated : 06/21/2021 Last updated : 06/30/2021
# App Service Environment networking > [!NOTE]
-> This article is about the App Service Environment v3 (preview)
+> This article is about the App Service Environment v3, which is used with Isolated v2 App Service plans.
> +
The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that hosts web apps, API apps, and function apps. When you install an ASE, you pick the Azure Virtual Network (VNet) that you want it to be deployed in. All of the inbound and outbound application traffic will be inside the VNet you specify. The ASE is deployed into a single subnet in your VNet. Nothing else can be deployed into that same subnet. The subnet needs to be delegated to Microsoft.Web/HostingEnvironments.

## Addresses
As you scale your App Service plans in your ASE, you'll use more addresses out o
## Ports
-The ASE receives application traffic on ports 80 and 443. If those ports are blocked, you can't reach your apps. Port 80 needs to be open from the load balancer to the ASE subnet as this port is used for keep alive traffic.
+The ASE receives application traffic on ports 80 and 443. If those ports are blocked, you can't reach your apps.
+
+> [!NOTE]
+> Port 80 must be allowed from AzureLoadBalancer to the ASE subnet for keep-alive traffic between the load balancer and the ASE infrastructure. You can still control port 80 traffic to your ASE virtual IP. A sketch of an NSG rule for this follows this note.
+>
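If you manage the subnet with a network security group, a rule along these lines keeps the keep-alive traffic flowing. This is a sketch with placeholder names and priority:

```bash
# Sketch: allow the platform keep-alive traffic on port 80 from the
# AzureLoadBalancer service tag. The NSG name, rule name, and priority
# are placeholders.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myAseNsg \
    --name AllowAzureLoadBalancerKeepAlive \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes AzureLoadBalancer \
    --source-port-ranges "*" \
    --destination-port-ranges 80
```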
## Extra configurations
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
description: Overview on the App Service Environment
ms.assetid: 3d37f007-d6f2-4e47-8e26-b844e47ee919 Previously updated : 06/21/2021 Last updated : 07/05/2021 -+ # App Service Environment overview > [!NOTE]
-> This article is about the App Service Environment v3 (preview)
+> This article is about the App Service Environment v3, which is used with Isolated v2 App Service plans.
> The Azure App Service Environment is an Azure App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale. This capability can host your:
There are a few features that are not available in ASEv3 that were available in
With ASEv3, there is a different pricing model depending on the type of ASE deployment you have. The three pricing models are: -- ASEv3: If ASE is empty, there is a charge as if you had one ASP with one instance of Windows I1v2. The one instance charge is not an additive charge but is only applied if the ASE is totally empty.-- Availability Zone ASEv3: There is a minimum 9 Windows I1v2 instance charge. There is no added charge for availability zone support if you have 9 or more App Service plan instances. -- Dedicated host ASEv3: With a dedicated host deployment, you are charged for two dedicated hosts per our pricing at ASEv3 creation then a small percentage of the Isolated V2 rate per core charge as you scale.
+- **ASEv3**: If the ASE is empty, there is a charge as if you had one ASP with one instance of Windows I1v2. The one-instance charge is not an additive charge but is applied only if the ASE is totally empty.
+- **Availability Zone ASEv3**: There is a minimum 9 Windows I1v2 instance charge. There is no added charge for availability zone support if you have 9 or more App Service plan instances.
+- **Dedicated host ASEv3**: With a dedicated host deployment, you are charged for two dedicated hosts per our pricing at ASEv3 creation then a small percentage of the Isolated V2 rate per core charge as you scale.
Reserved Instance pricing for Isolated v2 will be available after GA.
+## Regions
+
+The ASEv3 is available in the following regions.
+
+|Normal ASEv3 regions| Dedicated hosts regions| AZ ASEv3 regions|
+|--|-||
+|Australia East| Australia East| Australia East|
+|Australia Southeast| Australia Southeast |Canada Central|
+|Brazil South |Brazil South |Central US|
+|Canada Central| Canada Central| East US|
+|Central India |Central India| East US 2|
+|Central US |Central US |France Central|
+|East Asia |East Asia| Germany West Central|
+|East US |East US | North Europe|
+|East US 2| East US 2| South Central US|
+|France Central |France Central | Southeast Asia|
+|Germany West Central |Germany West Central| UK South|
+|Korea Central |Korea Central | West Europe|
+|North Europe |North Europe| West US 2|
+|Norway East |Norway East| |
+|South Africa North| South Africa North| |
+|South Central US |South Central US | |
+|Southeast Asia| Southeast Asia | |
+|Switzerland North |Switzerland North| |
+|UK South| UK West| |
+|UK West| West Central US | |
+|West Central US |West Europe| |
+|West Europe |West US | |
+|West US |West US 2| |
+|West US 2| | |
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using-an-ase.md
# Use an App Service Environment
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
An App Service Environment (ASE) is a deployment of Azure App Service into a subnet in a customer's Azure Virtual Network instance. An ASE consists of:
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using.md
description: Learn how to use your App Service Environment to host isolated appl
ms.assetid: 377fce0b-7dea-474a-b64b-7fbe78380554 Previously updated : 06/21/2021 Last updated : 07/06/2021 - # Using an App Service Environment
+> [!NOTE]
+> This article is about the App Service Environment v3, which is used with Isolated v2 App Service plans.
+>
+ The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that injects directly into an Azure Virtual Network (VNet) of your choosing. It's a system that is only used by one customer. Apps deployed into the ASE are subject to the networking features that are applied to the ASE subnet. There aren't any additional features that need to be enabled on your apps to be subject to those networking features. ## Create an app in an ASE
If you want to use your own DNS server, you need to add the following records:
To configure DNS in Azure DNS Private zones:
-1. create an Azure DNS private zone named <ASE name>.appserviceenvironment.net
+1. create an Azure DNS private zone named &lt;ASE name&gt;.appserviceenvironment.net
1. create an A record in that zone that points * to the inbound IP address
1. create an A record in that zone that points @ to the inbound IP address
1. create an A record in that zone that points *.scm to the inbound IP address, as shown in the sketch after this list
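A minimal Azure CLI sketch of those steps follows; the zone name, virtual network, and inbound IP address are placeholders for your ASE's values.

```bash
# Sketch: create the private zone, link it to the ASE VNet, and add the
# three A records. All names and the IP address are placeholders.
az network private-dns zone create \
    --resource-group myResourceGroup \
    --name myase.appserviceenvironment.net

az network private-dns link vnet create \
    --resource-group myResourceGroup \
    --zone-name myase.appserviceenvironment.net \
    --name myAseDnsLink \
    --virtual-network myVNet \
    --registration-enabled false

for record in "*" "@" "*.scm"; do
    az network private-dns record-set a add-record \
        --resource-group myResourceGroup \
        --zone-name myase.appserviceenvironment.net \
        --record-set-name "$record" \
        --ipv4-address 10.0.0.11
done
```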
app-service Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/zone-redundancy.md
ms.assetid: 24e3e7eb-c160-49ff-8d46-e947818ef186 Previously updated : 07/15/2020 Last updated : 07/05/2021 - # Availability Zone support for App Service Environments
+> [!NOTE]
+> This article is about the App Service Environment v2, which is used with Isolated App Service plans.
+>
App Service Environments (ASE) can be deployed into Availability Zones (AZ). Customers can deploy internal load balancer (ILB) ASEs into a specific AZ within an Azure region. If you pin your ILB ASE to a specific AZ, the resources used by the ILB ASE will either be pinned to the specified AZ or deployed in a zone-redundant manner. An ILB ASE that is explicitly deployed into an AZ is considered a zonal resource because the ILB ASE is pinned to a specific zone. The following ILB ASE dependencies will be pinned to the specified zone:
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/reference-app-settings.md
The following environment variables are related to [health checks](monitor-insta
## Push notifications
-The following environment variables are related to the [push notifications](/previous-versions/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push.md#configure-hub) feature.
+The following environment variables are related to the [push notifications](/previous-versions/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push#configure-hub) feature.
| Setting name | Description | |-|-|
attestation Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/attestation/quickstart-portal.md
Follow the steps in this section to view, add, and delete policy signer certific
1. Go to the Azure portal menu or the home page and select **All resources**.
1. In the filter box, enter the attestation provider name.
1. Select the attestation provider and go to the overview page.
-1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. You see a prompt to select certificate for authentication. Please choose the appropriate option to proceed.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, please choose the appropriate option to proceed.
1. Select **Download policy signer certificates**. The button will be disabled for attestation providers created without the policy signing requirement.
1. The downloaded text file will have all certificates in a JWS format.
1. Verify the certificate count and the downloaded certificates.
Follow the steps in this section to view, add, and delete policy signer certific
1. Go to the Azure portal menu or the home page and select **All resources**.
1. In the filter box, enter the attestation provider name.
1. Select the attestation provider and go to the overview page.
-1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, please choose the appropriate option to proceed.
1. Select **Add** on the upper menu. The button will be disabled for attestation providers created without the policy signing requirement.
1. Upload the policy signer certificate file and select **Add**. [See examples of policy signer certificates](./policy-signer-examples.md).
Follow the steps in this section to view, add, and delete policy signer certific
1. Go to the Azure portal menu or the home page and select **All resources**.
1. In the filter box, enter the attestation provider name.
1. Select the attestation provider and go to the overview page.
-1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane.
+1. Select **Policy signer certificates** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, please choose the appropriate option to proceed.
1. Select **Delete** on the upper menu. The button will be disabled for attestation providers created without the policy signing requirement.
1. Upload the policy signer certificate file and select **Delete**. [See examples of policy signer certificates](./policy-signer-examples.md).
This section describes how to view an attestation policy and how to configure po
1. Go to the Azure portal menu or the home page and select **All resources**.
1. In the filter box, enter the attestation provider name.
1. Select the attestation provider and go to the overview page.
-1. Select **Policy** on the resource menu on the left side of the window or on the lower pane. You see a prompt to select certificate for authentication. Please choose the appropriate option to proceed.
+1. Select **Policy** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, please choose the appropriate option to proceed.
1. Select the preferred **Attestation Type** and view the **Current policy**.

### Configure an attestation policy
Follow these steps to upload a policy in JWT or text format if the attestation p
1. Go to the Azure portal menu or the home page and select **All resources**.
1. In the filter box, enter the attestation provider name.
1. Select the attestation provider and go to the overview page.
-1. Select **Policy** on the resource menu on the left side of the window or on the lower pane.
+1. Select **Policy** on the resource menu on the left side of the window or on the lower pane. If you see a prompt to select a certificate for authentication, please choose the appropriate option to proceed.
1. Select **Configure** on the upper menu.
1. Select **Policy Format** as **JWT** or as **Text**.
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-secure-asset-encryption.md
Previously updated : 01/11/2020 Last updated : 06/25/2021
Secure assets in Azure Automation include credentials, certificates, connections
By default, your Azure Automation account uses Microsoft-managed keys.
-Each secure asset is encrypted and stored in Azure Automation using a unique key (Data Encryption key) that is generated for each automation account. These keys themselves are encrypted and stored in Azure Automation using yet another unique key that is generated for each account called an Account Encryption Key (AEK). These account encryption keys encrypted and stored in Azure Automation using Microsoft-managed Keys.
+Each secure asset is encrypted and stored in Azure Automation using a unique key (Data Encryption key) that is generated for each automation account. These keys themselves are encrypted and stored in Azure Automation using yet another unique key that is generated for each account, called an Account Encryption Key (AEK). These account encryption keys are encrypted and stored in Azure Automation using Microsoft-managed keys.
-## Keys that you manage with Key Vault (preview)
+## Keys that you manage with Key Vault
You can manage encryption of secure assets for your Automation account with your own keys. When you specify a customer-managed key at the level of the Automation account, that key is used to protect and control access to the account encryption key for the Automation account. This in turn is used to encrypt and decrypt all the secure assets. Customer-managed keys offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your secure assets.
Use Azure Key Vault to store customer-managed keys. You can either create your o
## Use of customer-managed keys for an Automation account
-When you use encryption with customer-managed keys for an Automation account, Azure Automation wraps the account encryption key with the customer-managed key in the associated key vault. Enabling customer-managed keys does not impact performance, and the account is encrypted with the new key immediately, without any delay.
+When you use encryption with customer-managed keys for an Automation account, Azure Automation wraps the account encryption key with the customer-managed key in the associated key vault. Enabling customer-managed keys doesn't impact performance, and the account is encrypted with the new key immediately, without any delay.
A new Automation account is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the account is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Automation account. The managed identity is available only after the Automation account is created.
-When you modify the key being used for Azure Automation secure asset encryption, by enabling or disabling customer-managed keys, updating the key version, or specifying a different key, the encryption of the account encryption key changes but the secure assets in your Azure Automation account do not need to be re-encrypted.
+When you modify the key being used for Azure Automation secure asset encryption, by enabling or disabling customer-managed keys, updating the key version, or specifying a different key, the encryption of the account encryption key changes but the secure assets in your Azure Automation account don't need to be re-encrypted.
-> [!NOTE]
-> To enable customer-managed keys, you need to make Azure Automation REST API calls using api version 2020-01-13-preview
+> [!NOTE]
+> To enable customer-managed keys using Azure Automation REST API calls, you need to use API version 2020-01-13-preview.
-### Prerequisites for using customer-managed keys in Azure Automation
+## Prerequisites for using customer-managed keys in Azure Automation
Before enabling customer-managed keys for an Automation account, you must ensure the following prerequisites are met: -- The Automation account and the key vault can be in different subscriptions, but need to be in the same Azure Active Directory tenant.
+- An [Azure Key Vault](../key-vault/general/basic-concepts.md) with the **Soft Delete** and **Do Not Purge** properties enabled. These properties are required to allow for recovery of keys if there's accidental deletion.
+- Only RSA keys are supported with Azure Automation encryption. For more information about keys, see [About Azure Key Vault keys, secrets, and certificates](../key-vault/general/about-keys-secrets-certificates.md).
+- The Automation account and the key vault can be in different subscriptions but need to be in the same Azure Active Directory tenant.
+- When using PowerShell, verify the [Azure Az PowerShell module](/powershell/azure/new-azureps-module-az) is installed. To install or upgrade, see [How to install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
-### Assignment of an identity to the Automation account
+## Generate and assign a new system-assigned identity for an Automation account
To use customer-managed keys with an Automation account, your Automation account needs to authenticate against the key vault storing customer-managed keys. Azure Automation uses system assigned managed identities to authenticate the account with Azure Key Vault. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
-Configure a system assigned managed identity to the Automation account using the following REST API call:
+### Using PowerShell
+
+Use the PowerShell cmdlet [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount) to modify an existing Azure Automation account. The `-AssignSystemIdentity` parameter generates and assigns a new system-assigned identity for the Automation account to use with other services like Azure Key Vault. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md) and [About Azure Key Vault](../key-vault/general/overview.md). Execute the following code:
+
+```powershell
+# Revise variables with your actual values.
+$resourceGroup = "ResourceGroupName"
+$automationAccount = "AutomationAccountName"
+$vaultName = "KeyVaultName"
+$keyName = "KeyName"
+
+Set-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount `
+ -AssignSystemIdentity
+```
+
+The output includes the new system-assigned identity.
+Obtain the `PrincipalId` for later use. Execute the following code:
+
+```powershell
+$principalID = (Get-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount).Identity.PrincipalId
+
+$principalID
+```
+
+### Using REST
+
+Configure a system-assigned managed identity for the Automation account using the following REST API call:
```http PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
System-assigned identity for the Automation account is returned in a response si
} ```
-### Configuration of the Key Vault access policy
+## Configuration of the Key Vault access policy
-Once a managed identity is assigned to the Automation account, you configure access to the key vault storing customer-managed keys. Azure Automation requires **get**, **recover**, **wrapKey**, **UnwrapKey** on the customer-managed keys.
+Once a system-assigned managed identity is assigned to the Automation account, you configure access to the key vault storing customer-managed keys. Azure Automation requires the **Get**, **Recover**, **WrapKey**, and **UnwrapKey** operation permissions for the identity to access the customer-managed keys.
-Such an access policy can be set using the following REST API call:
+### Using PowerShell
+
+Use the PowerShell cmdlet [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) to grant the necessary permissions. Then use [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey) to create a key in the key vault. Execute the following code:
+
+```powershell
+Set-AzKeyVaultAccessPolicy `
+ -VaultName $vaultName `
+ -ObjectId $principalID `
+ -PermissionsToKeys Get, Recover, UnwrapKey, WrapKey
+
+Add-AzKeyVaultKey `
+ -VaultName $vaultName `
+ -Name $keyName `
+ -Destination 'Software'
+```
+
+The output shows the properties of the newly created key.
+### Using REST
+
+The access policy can be set using the following REST API call:
```http PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sample-group/providers/Microsoft.KeyVault/vaults/sample-vault/accessPolicies/add?api-version=2018-02-14
Request body:
> [!NOTE] > The **tenantId** and **objectId** fields must be provided with values of **identity.tenantId** and **identity.principalId** respectively from the response of managed identity for the Automation account.
-### Change the configuration of Automation account to use customer-managed key
+## Reconfigure Automation account to use customer-managed key
+
+If you want to switch your Automation account from Microsoft-managed keys to customer-managed keys, you can make this change using Azure PowerShell or the REST API, as shown in the following sections.
+
+### Using PowerShell
+
+Use the PowerShell cmdlet [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount) to reconfigure the Automation account to use customer-managed keys.
+
+```powershell
+$vaultURI = (Get-AzKeyVault -VaultName $vaultName).VaultUri
+$keyVersion = (Get-AzKeyVaultKey -VaultName $vaultName -KeyName $keyName).Version
+
+Set-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount `
+ -AssignSystemIdentity `
+ -KeyName $keyName `
+ -KeyVaultUri $vaultURI `
+ -KeyVersion $keyVersion `
+ -KeyVaultEncryption
+```
+
+You can verify the change by running the following command:
+
+```powershell
+(Get-AzAutomationAccount `
+ -ResourceGroupName $resourceGroup `
+ -Name $automationAccount).Encryption `
+ | ConvertTo-Json
+```
+
+The output shows the account's encryption configuration in JSON format.
+### Using REST
-Finally, you can switch your Automation account from Microsoft-managed keys to customer-managed keys, using the following REST API call:
+Use the following REST API call:
```http PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-name/providers/Microsoft.Automation/automationAccounts/automation-account-name?api-version=2020-01-13-preview
Sample response
You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Automation account to use the new key URI.
-Rotating the key does not trigger re-encryption of secure assets in the Automation account. There is no further action required.
+Rotating the key doesn't trigger re-encryption of secure assets in the Automation account. There's no further action required.
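A minimal PowerShell sketch of that update, reusing the variables and cmdlets from the earlier examples in this article (the variable values are the placeholders defined there):

```powershell
# Fetch the latest (rotated) version of the key from Key Vault.
$newKeyVersion = (Get-AzKeyVaultKey -VaultName $vaultName -KeyName $keyName).Version

# Point the Automation account at the new key version.
Set-AzAutomationAccount `
    -ResourceGroupName $resourceGroup `
    -Name $automationAccount `
    -KeyName $keyName `
    -KeyVaultUri (Get-AzKeyVault -VaultName $vaultName).VaultUri `
    -KeyVersion $newKeyVersion `
    -KeyVaultEncryption
```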
## Revocation of access to a customer-managed key
automation Disable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/disable-managed-identity-for-automation.md
Removing a system-assigned identity using this method also deletes it from Azure
- For more information about enabling managed identity in Azure Automation, see [Enable and use managed identity for Automation (preview)](enable-managed-identity-for-automation.md). -- For an overview of Automation account security, see [Automation account authentication overview](automation-security-overview.md).
+- For an overview of Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Automation Region Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/automation-region-dns-records.md
To support [Private Link](../../private-link/private-link-overview.md) in Azure
| **Region** | **DNS record** | | | |
-| West Central US |`https://<accountId>.webhook.wcus.azure-automation.net`<br>`https://<accountId>.agentsvc.wcus.azure-automation.net`<br>`https://<accountId>.jrds.wcus.azure-automation.net` |
-| West US |`https://<accountId>.webhook.wus.azure-automation.net`<br>`https://<accountId>.agentsvc.wus.azure-automation.net`<br>`https://<accountId>.jrds.wus.azure-automation.net` |
-| West US 2 |`https://<accountId>.webhook.wus2.azure-automation.net`<br>`https://<accountId>.agentsvc.wus2.azure-automation.net`<br>`https://<accountId>.jrds.wus2.azure-automation.net` |
-| Central US |`https://<accountId>.webhook.cus.azure-automation.net`<br>`https://<accountId>.agentsvc.cus.azure-automation.net`<br>`https://<accountId>.jrds.cus.azure-automation.net` |
-| South Central US |`https://<accountId>.webhook.scus.azure-automation.net`<br>`https://<accountId>.agentsvc.scus.azure-automation.net`<br>`https://<accountId>.jrds.scus.azure-automation.net` |
-| North Central US |`https://<accountId>.webhook.ncus.azure-automation.net`<br>`https://<accountId>.agentsvc.ncus.azure-automation.net`<br>`https://<accountId>.jrds.ncus.azure-automation.net` |
-| East US |`https://<accountId>.webhook.eus.azure-automation.net`<br>`https://<accountId>.agentsvc.eus.azure-automation.net`<br>`https://<accountId>.jrds.eus.azure-automation.net` |
-| East US 2 |`https://<accountId>.webhook.eus2.azure-automation.net`<br>`https://<accountId>.agentsvc.eus2.azure-automation.net`<br>`https://<accountId>.jrds.eus2.azure-automation.net` |
+| South Africa North |`https://<accountId>.webhook.san.azure-automation.net`<br>`https://<accountId>.agentsvc.san.azure-automation.net`<br>`https://<accountId>.jrds.san.azure-automation.net` |
+| East Asia |`https://<accountId>.webhook.ea.azure-automation.net`<br>`https://<accountId>.agentsvc.ea.azure-automation.net`<br>`https://<accountId>.jrds.ea.azure-automation.net` |
+| South East Asia |`https://<accountId>.webhook.sea.azure-automation.net`<br>`https://<accountId>.agentsvc.sea.azure-automation.net`<br>`https://<accountId>.jrds.sea.azure-automation.net` |
+| Australia Central |`https://<accountId>.webhook.ac.azure-automation.net`<br>`https://<accountId>.agentsvc.ac.azure-automation.net`<br>`https://<accountId>.jrds.ac.azure-automation.net` |
+| Australia Central 2 |`https://<accountId>.webhook.cbr2.azure-automation.net`<br>`https://<accountId>.agentsvc.cbr2.azure-automation.net`<br>`https://<accountId>.jrds.cbr2.azure-automation.net` |
+| Australia South East |`https://<accountId>.webhook.ase.azure-automation.net`<br>`https://<accountId>.agentsvc.ase.azure-automation.net`<br>`https://<accountId>.jrds.ase.azure-automation.net` |
+| Australia East |`https://<accountId>.webhook.ae.azure-automation.net`<br>`https://<accountId>.agentsvc.ae.azure-automation.net`<br>`https://<accountId>.jrds.ae.azure-automation.net` |
+| Brazil South |`https://<accountId>.webhook.brs.azure-automation.net`<br>`https://<accountId>.agentsvc.brs.azure-automation.net`<br>`https://<accountId>.jrds.brs.azure-automation.net` |
+| Brazil Southeast |`https://<accountId>.webhook.brse.azure-automation.net`<br>`https://<accountId>.agentsvc.brse.azure-automation.net`<br>`https://<accountId>.jrds.brse.azure-automation.net` |
| Canada Central |`https://<accountId>.webhook.cc.azure-automation.net`<br>`https://<accountId>.agentsvc.cc.azure-automation.net`<br>`https://<accountId>.jrds.cc.azure-automation.net` |
+| China East 2 |`https://<accountId>.webhook.sha2.azure-automation.cn`<br>`https://<accountId>.agentsvc.sha2.azure-automation.cn`<br>`https://<accountId>.jrds.sha2.azure-automation.cn` |
+| China North |`https://<accountId>.webhook.bjb.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjb.azure-automation.cn`<br>`https://<accountId>.jrds.bjb.azure-automation.cn` |
+| China North 2 |`https://<accountId>.webhook.bjs2.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjs2.azure-automation.cn`<br>`https://<accountId>.jrds.bjs2.azure-automation.cn` |
| West Europe |`https://<accountId>.webhook.we.azure-automation.net`<br>`https://<accountId>.agentsvc.we.azure-automation.net`<br>`https://<accountId>.jrds.we.azure-automation.net` | | North Europe |`https://<accountId>.webhook.ne.azure-automation.net`<br>`https://<accountId>.agentsvc.ne.azure-automation.net`<br>`https://<accountId>.jrds.ne.azure-automation.net` |
-| South East Asia |`https://<accountId>.webhook.sea.azure-automation.net`<br>`https://<accountId>.agentsvc.sea.azure-automation.net`<br>`https://<accountId>.jrds.sea.azure-automation.net` |
-| East Asia |`https://<accountId>.webhook.ea.azure-automation.net`<br>`https://<accountId>.agentsvc.ea.azure-automation.net`<br>`https://<accountId>.jrds.ea.azure-automation.net` |
+| France Central |`https://<accountId>.webhook.fc.azure-automation.net`<br>`https://<accountId>.agentsvc.fc.azure-automation.net`<br>`https://<accountId>.jrds.fc.azure-automation.net` |
+| France South |`https://<accountId>.webhook.mrs.azure-automation.net`<br>`https://<accountId>.agentsvc.mrs.azure-automation.net`<br>`https://<accountId>.jrds.mrs.azure-automation.net` |
+| Germany West Central |`https://<accountId>.webhook.dewc.azure-automation.de`<br>`https://<accountId>.agentsvc.dewc.azure-automation.de`<br>`https://<accountId>.jrds.dewc.azure-automation.de` |
| Central India |`https://<accountId>.webhook.cid.azure-automation.net`<br>`https://<accountId>.agentsvc.cid.azure-automation.net`<br>`https://<accountId>.jrds.cid.azure-automation.net` |
+| South India |`https://<accountId>.webhook.ma.azure-automation.net`<br>`https://<accountId>.agentsvc.ma.azure-automation.net`<br>`https://<accountId>.jrds.ma.azure-automation.net` |
| Japan East |`https://<accountId>.webhook.jpe.azure-automation.net`<br>`https://<accountId>.agentsvc.jpe.azure-automation.net`<br>`https://<accountId>.jrds.jpe.azure-automation.net` |
+| Japan West |`https://<accountId>.webhook.jpw.azure-automation.net`<br>`https://<accountId>.agentsvc.jpw.azure-automation.net`<br>`https://<accountId>.jrds.jpw.azure-automation.net` |
| Korea Central |`https://<accountId>.webhook.kc.azure-automation.net`<br>`https://<accountId>.agentsvc.kc.azure-automation.net`<br>`https://<accountId>.jrds.kc.azure-automation.net` |
-| Australia South East |`https://<accountId>.webhook.ase.azure-automation.net`<br>`https://<accountId>.agentsvc.ase.azure-automation.net`<br>`https://<accountId>.jrds.ase.azure-automation.net` |
-| Australia East |`https://<accountId>.webhook.ae.azure-automation.net`<br>`https://<accountId>.agentsvc.ae.azure-automation.net`<br>`https://<accountId>.jrds.ae.azure-automation.net` |
-| Australia Central |`https://<accountId>.webhook.ac.azure-automation.net`<br>`https://<accountId>.agentsvc.ac.azure-automation.net`<br>`https://<accountId>.jrds.ac.azure-automation.net` |
+| Korea South |`https://<accountId>.webhook.ps.azure-automation.net`<br>`https://<accountId>.agentsvc.ps.azure-automation.net`<br>`https://<accountId>.jrds.ps.azure-automation.net` |
+| Norway East |`https://<accountId>.webhook.noe.azure-automation.net`<br>`https://<accountId>.agentsvc.noe.azure-automation.net`<br>`https://<accountId>.jrds.noe.azure-automation.net` |
+| Norway West |`https://<accountId>.webhook.now.azure-automation.net`<br>`https://<accountId>.agentsvc.now.azure-automation.net`<br>`https://<accountId>.jrds.now.azure-automation.net` |
+| Switzerland West |`https://<accountId>.webhook.stzw.azure-automation.net`<br>`https://<accountId>.agentsvc.stzw.azure-automation.net`<br>`https://<accountId>.jrds.stzw.azure-automation.net` |
+| UAE Central |`https://<accountId>.webhook.auh.azure-automation.net`<br>`https://<accountId>.agentsvc.auh.azure-automation.net`<br>`https://<accountId>.jrds.auh.azure-automation.net` |
+| UAE North |`https://<accountId>.webhook.uaen.azure-automation.net`<br>`https://<accountId>.agentsvc.uaen.azure-automation.net`<br>`https://<accountId>.jrds.uaen.azure-automation.net` |
+| UK West |`https://<accountId>.webhook.cw.azure-automation.net`<br>`https://<accountId>.agentsvc.cw.azure-automation.net`<br>`https://<accountId>.jrds.cw.azure-automation.net` |
| UK South |`https://<accountId>.webhook.uks.azure-automation.net`<br>`https://<accountId>.agentsvc.uks.azure-automation.net`<br>`https://<accountId>.jrds.uks.azure-automation.net` |
-| France Central |`https://<accountId>.webhook.fc.azure-automation.net`<br>`https://<accountId>.agentsvc.fc.azure-automation.net`<br>`https://<accountId>.jrds.fc.azure-automation.net` |
-| South Africa North |`https://<accountId>.webhook.san.azure-automation.net`<br>`https://<accountId>.agentsvc.san.azure-automation.net`<br>`https://<accountId>.jrds.san.azure-automation.net` |
-| Brazil South |`https://<accountId>.webhook.brs.azure-automation.net`<br>`https://<accountId>.agentsvc.brs.azure-automation.net`<br>`https://<accountId>.jrds.brs.azure-automation.net` |
-| China North |`https://<accountId>.webhook.bjb.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjb.azure-automation.cn`<br>`https://<accountId>.jrds.bjb.azure-automation.cn` |
-| China North 2 |`https://<accountId>.webhook.bjs2.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjs2.azure-automation.cn`<br>`https://<accountId>.jrds.bjs2.azure-automation.cn` |
-| China East 2 |`https://<accountId>.webhook.sha2.azure-automation.cn`<br>`https://<accountId>.agentsvc.sha2.azure-automation.cn`<br>`https://<accountId>.jrds.sha2.azure-automation.cn` |
+| Central US |`https://<accountId>.webhook.cus.azure-automation.net`<br>`https://<accountId>.agentsvc.cus.azure-automation.net`<br>`https://<accountId>.jrds.cus.azure-automation.net` |
+| East US |`https://<accountId>.webhook.eus.azure-automation.net`<br>`https://<accountId>.agentsvc.eus.azure-automation.net`<br>`https://<accountId>.jrds.eus.azure-automation.net` |
+| East US 2 |`https://<accountId>.webhook.eus2.azure-automation.net`<br>`https://<accountId>.agentsvc.eus2.azure-automation.net`<br>`https://<accountId>.jrds.eus2.azure-automation.net` |
+| North Central US |`https://<accountId>.webhook.ncus.azure-automation.net`<br>`https://<accountId>.agentsvc.ncus.azure-automation.net`<br>`https://<accountId>.jrds.ncus.azure-automation.net` |
+| South Central US |`https://<accountId>.webhook.scus.azure-automation.net`<br>`https://<accountId>.agentsvc.scus.azure-automation.net`<br>`https://<accountId>.jrds.scus.azure-automation.net` |
+| West Central US |`https://<accountId>.webhook.wcus.azure-automation.net`<br>`https://<accountId>.agentsvc.wcus.azure-automation.net`<br>`https://<accountId>.jrds.wcus.azure-automation.net` |
+| West US |`https://<accountId>.webhook.wus.azure-automation.net`<br>`https://<accountId>.agentsvc.wus.azure-automation.net`<br>`https://<accountId>.jrds.wus.azure-automation.net` |
+| West US 2 |`https://<accountId>.webhook.wus2.azure-automation.net`<br>`https://<accountId>.agentsvc.wus2.azure-automation.net`<br>`https://<accountId>.jrds.wus2.azure-automation.net` |
+| West US 3 |`https://<accountId>.webhook.usw3.azure-automation.net`<br>`https://<accountId>.agentsvc.usw3.azure-automation.net`<br>`https://<accountId>.jrds.usw3.azure-automation.net` |
| US Gov Virginia |`https://<accountId>.webhook.usge.azure-automation.us`<br>`https://<accountId>.agentsvc.usge.azure-automation.us`<br>`https://<accountId>.jrds.usge.azure-automation.us` | | US Gov Texas |`https://<accountId>.webhook.ussc.azure-automation.us`<br>`https://<accountId>.agentsvc.ussc.azure-automation.us`<br>`https://<accountId>.jrds.ussc.azure-automation.us` | | US Gov Arizona |`https://<accountId>.webhook.phx.azure-automation.us`<br>`https://<accountId>.agentsvc.phx.azure-automation.us`<br>`https://<accountId>.jrds.phx.azure-automation.us` |
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly.
## June 2021
+### Hybrid Runbook Worker support for Ubuntu 20.04 LTS
+
+**Type:** New feature
+
+See [Supported Linux operating systems](./automation-linux-hrw-install.md#supported-linux-operating-systems) for a complete list.
+ ### Security update for Log Analytics Contributor role **Type:** Plan for change
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/az-region.md
To achieve comprehensive business continuity on Azure, build your application ar
| [Azure Firewall](../firewall/deploy-availability-zone-powershell.md) | :large_blue_diamond: | | [Azure Firewall Manager](../firewall-manager/quick-firewall-policy.md) | :large_blue_diamond: | | [Azure Kubernetes Service (AKS)](../aks/availability-zones.md) | :large_blue_diamond: |
+| [Azure Media Services (AMS)](../media-services/latest/concept-availability-zones.md) | :large_blue_diamond: |
| [Azure Private Link](../private-link/private-link-overview.md) | :large_blue_diamond: | | [Azure Site Recovery](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md) | :large_blue_diamond: | | Azure SQL: [Virtual Machine](../azure-sql/database/high-availability-sla.md) | :large_blue_diamond: |
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
# Scale an Azure Cache for Redis instance
-Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features. For a Basic, Standard or Premium cache, you can change its size and tier after it's been created to keep up with your application needs. This article shows you how to scale your cache using the Azure portal, and tools such as Azure PowerShell, and Azure CLI.
+Azure Cache for Redis has different cache offerings, which provide flexibility in the choice of cache size and features. For a Basic, Standard, or Premium cache, you can change its size and tier after creating it to match your application needs. This article shows you how to scale your cache using the Azure portal and tools such as Azure PowerShell and Azure CLI.
## When to scale
You can monitor the following metrics to help determine if you need to scale.
* Network Bandwidth * CPU Usage
-If you determine your cache is no longer meeting your application's requirements, you can scale to a larger or smaller cache pricing tier that is right for your application. For more information on determining which cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier).
+If you determine your cache is no longer meeting your application requirements, you can scale it to a larger or smaller cache pricing tier that is right for your application. For more information on determining which cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier).
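For example, here's a minimal Azure PowerShell sketch of a scaling operation, assuming the Az.RedisCache module is installed; the cache and resource group names are placeholders:

```powershell
# Scale an existing cache to the Premium P1 size.
# "myCache" and "myResourceGroup" are placeholder names.
Set-AzRedisCache `
    -ResourceGroupName "myResourceGroup" `
    -Name "myCache" `
    -Sku "Premium" `
    -Size "P1"
```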
## Scale a cache
The following list contains answers to commonly asked questions about Azure Cach
* [Will I lose data from my cache during scaling?](#will-i-lose-data-from-my-cache-during-scaling) * [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling) * [Will my cache be available during scaling?](#will-my-cache-be-available-during-scaling)
-* With Geo-replication configured, why am I not able to scale my cache or change the shards in a cluster?
-* [Operations that are not supported](#operations-that-are-not-supported)
+* [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication)
+* [Operations that aren't supported](#operations-that-arent-supported)
* [How long does scaling take?](#how-long-does-scaling-take) * [How can I tell when scaling is complete?](#how-can-i-tell-when-scaling-is-complete)
No, your cache name and keys are unchanged during a scaling operation.
### How does scaling work?
-* When a **Basic** cache is scaled to a different size, it is shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
+* When a **Basic** cache is scaled to a different size, it's shut down and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
* When a **Basic** cache is scaled to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process.
-* When a **Standard** cache is scaled to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica performs a failover before it is reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
+* When a **Standard** cache is scaled to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size, and the data is transferred over. The other replica then does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes.
+* When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards.
+* When you scale in a clustered cache, data is first resharded and then the cluster size is reduced to the required number of shards.
### Will I lose data from my cache during scaling?
-* When a **Basic** cache is scaled to a new size, all data is lost and the cache is unavailable during the scaling operation.
-* When a **Basic** cache is scaled to a **Standard** cache, the data in the cache is typically preserved.
-* When a **Standard** cache is scaled to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When scaling down a Standard or Premium cache to a smaller size, data may be lost depending on how much data is in the cache related to the new size when it is scaled. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+* When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation.
+* When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.
+* When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache down to a smaller size, data can be lost if the data size exceeds the new smaller size. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
### Is my custom databases setting affected during scaling?
While Standard and Premium caches have a 99.9% SLA for availability, there's no
### Will my cache be available during scaling? * **Standard** and **Premium** caches remain available during the scaling operation. However, connection blips can occur while scaling Standard and Premium caches, and also while scaling from Basic to Standard caches. These connection blips are expected to be small and redis clients can generally re-establish their connection instantly.
-* **Basic** caches are offline during scaling operations to a different size. Basic caches remain available when scaling from **Basic** to **Standard** but, may experience a small connection blip. If a connection blip occurs, redis clients can generally re-establish their connection instantly.
+* **Basic** caches are offline during scaling operations to a different size. Basic caches remain available when scaling from **Basic** to **Standard** but might experience a small connection blip. If a connection blip occurs, Redis clients can generally re-establish their connection instantly.
-### Scaling limitations with Geo-replication
+### Are there scaling limitations with geo-replication?
-Once you have added a Geo-replication link between two caches, you can no longer start a scaling operation or change the number of shards in a cluster. You must unlink the cache to issue these commands. For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md).
+With geo-replication configured, you might notice that you can't scale a cache or change the shards in a cluster. A geo-replication link between two caches prevents you from starting a scaling operation or changing the number of shards in a cluster. You must unlink the cache to issue these commands. For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md).
-### Operations that are not supported
+### Operations that aren't supported
* You can't scale from a higher pricing tier to a lower pricing tier. * You can't scale from a **Premium** cache down to a **Standard** or a **Basic** cache. * You can't scale from a **Standard** cache down to a **Basic** cache. * You can scale from a **Basic** cache to a **Standard** cache but you can't change the size at the same time. If you need a different size, you can do a scaling operation to the size you want at a later time.
-* You can't scale from a **Basic** cache directly to a **Premium** cache. First scale from **Basic** to **Standard** in one scaling operation, and then scale from **Standard** to **Premium** in an operation later.
+* You can't scale from a **Basic** cache directly to a **Premium** cache. First scale from **Basic** to **Standard** in one scaling operation, and then scale from **Standard** to **Premium** in a later operation.
* You can't scale from a larger size down to the **C0 (250 MB)** size. If a scaling operation fails, the service tries to revert the operation, and the cache will revert to the original size. ### How long does scaling take?
-Scaling time depends on how much data is in the cache, with larger amounts of data taking a longer time to complete. Scaling takes approximately 20 minutes. For clustered caches, scaling takes approximately 20 minutes per shard.
+Scaling time depends on a few factors that affect how long the operation takes:
+
+* Amount of data: Larger amounts of data take a longer time to be replicated.
+* High write requests: A higher number of writes means more data replicates across nodes or shards.
+* High server load: A higher server load means the Redis server is busy and has limited CPU cycles to complete data redistribution.
+
+Generally, when you scale a cache with no data, it takes approximately 20 minutes. For clustered caches, scaling takes approximately 20 minutes per shard with minimal data.
+
### How can I tell when scaling is complete?
azure-functions Disable Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/disable-function.md
This article explains how to disable a function in Azure Functions. To *disable*
The recommended way to disable a function is with an app setting in the format `AzureWebJobs.<FUNCTION_NAME>.Disabled` set to `true`. You can create and modify this application setting in a number of ways, including by using the [Azure CLI](/cli/azure/) and from your function's **Overview** tab in the [Azure portal](https://portal.azure.com).
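As a minimal sketch, you could create this app setting with the `Update-AzFunctionAppSetting` cmdlet from the Az.Functions PowerShell module; the function app, resource group, and function names here are placeholders:

```powershell
# Disable the function named "HttpExample" by setting
# AzureWebJobs.HttpExample.Disabled to true.
Update-AzFunctionAppSetting `
    -Name "myFunctionApp" `
    -ResourceGroupName "myResourceGroup" `
    -AppSetting @{ "AzureWebJobs.HttpExample.Disabled" = "true" }
```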
-> [!NOTE]
-> When you disable an HTTP triggered function by using the methods described in this article, the endpoint may still by accessible when running on your local computer.
-
-> [!NOTE]
-> At the present, Function names with hyphens (`-`) in them cannot be disabled in Linux-based App Service Plans. If you need to disable your Functions in Linux plans, avoid using hyphens in your Function names.
- ## Disable a function # [Portal](#tab/portal)
In the second example, the function is disabled when there is an app setting tha
>[!IMPORTANT] >The portal uses application settings to disable v1.x functions. When an application setting conflicts with the function.json file, an error can occur. You should remove the `disabled` property from the function.json file to prevent errors.
+## Considerations
+
+Keep the following considerations in mind when you disable functions:
+++ When you disable an HTTP triggered function by using the methods described in this article, the endpoint may still be accessible when running on your local computer. +++ At this time, function names that contain a hyphen (`-`) can't be disabled when running on Linux in a Dedicated (App Service) plan. If you need to disable your functions when running on Linux in a Dedicated plan, don't use hyphens in your function names. ## Next steps
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-app-settings.md
Only used when deploying to a Premium plan or to a Consumption plan running on W
## WEBSITE\_CONTENTOVERVNET
-For Premium plans only. A value of `1` enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. To learn more, see [Restrict your storage account to a virtual network](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).
+A value of `1` enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. To learn more, see [Restrict your storage account to a virtual network](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).
|Key|Sample value| ||| |WEBSITE_CONTENTOVERVNET|1|
+Supported on [Premium](functions-premium-plan.md) and [Dedicated (App Service) plans](dedicated-plan.md) (Standard and higher) running Windows. Not currently supported for Consumption and Premium plans running Linux.
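As a sketch, you could set this value on an existing function app with the `Update-AzFunctionAppSetting` cmdlet from the Az.Functions PowerShell module; the app and resource group names are placeholders:

```powershell
# Allow the function app to scale while its storage account is restricted
# to a virtual network.
Update-AzFunctionAppSetting `
    -Name "myFunctionApp" `
    -ResourceGroupName "myResourceGroup" `
    -AppSetting @{ "WEBSITE_CONTENTOVERVNET" = "1" }
```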
+ ## WEBSITE\_CONTENTSHARE The file path to the function app code and configuration in an event-driven scaling plan on Windows. Used with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Default is a unique string that begins with the function app name. See [Create a function app](functions-infrastructure-as-code.md#windows).
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-linux-custom-image.md
A function app on Azure manages the execution of your functions in your hosting
::: zone-end The *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az_functionapp_config_container_show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az_functionapp_config_container_set) command to deploy from a different image.
+
+ > [!TIP]
 + > You can use the [`DisableColors` setting](functions-host-json.md#console) in the host.json file to prevent ANSI control characters from being written to the container logs.
1. Display the connection string for the storage account you created by using the [az storage account show-connection-string](/cli/azure/storage/account) command. Replace `<storage-name>` with the name of the storage account you created above:
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json.md
For more information on snapshots, see [Debug snapshots on exceptions in .NET ap
Configuration settings can be found in [Storage blob triggers and bindings](functions-bindings-storage-blob.md#hostjson-settings).
+## console
+
+This setting is a child of [logging](#logging). It controls the console logging when not in debugging mode.
+
+```json
+{
+ "logging": {
+ ...
+ "console": {
+ "isEnabled": false,
+ "DisableColors": true
+ },
+ ...
+ }
+}
+```
+
+|Property |Default | Description |
+||||
+|DisableColors|false| Suppresses log formatting in the container logs on Linux. Set to true if you're seeing unwanted ANSI control characters in the container logs when running on Linux. |
+|isEnabled|false|Enables or disables console logging.|
+ ## cosmosDb Configuration setting can be found in [Cosmos DB triggers and bindings](functions-bindings-cosmosdb-v2-output.md#host-json).
Controls the logging behaviors of the function app, including Application Insigh
|console|n/a| The [console](#console) logging setting. | |applicationInsights|n/a| The [applicationInsights](#applicationinsights) setting. |
-## console
-
-This setting is a child of [logging](#logging). It controls the console logging when not in debugging mode.
-
-```json
-{
- "logging": {
- ...
- "console": {
- "isEnabled": "false"
- },
- ...
- }
-}
-```
-
-|Property |Default | Description |
-||||
-|isEnabled|false|Enables or disables console logging.|
- ## managedDependency Managed dependency is a feature that is currently only supported with PowerShell based functions. It enables dependencies to be automatically managed by the service. When the `enabled` property is set to `true`, the `requirements.psd1` file is processed. Dependencies are updated when any minor versions are released. For more information, see [Managed dependency](functions-reference-powershell.md#dependency-management) in the PowerShell article.
azure-functions Functions How To Use Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-nat-gateway.md
Last updated 2/26/2021
# Tutorial: Control Azure Functions outbound IP with an Azure virtual network NAT gateway
-Virtual network address translation (NAT) simplifies outbound-only internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. An NAT can be useful for Azure Functions or Web Apps that need to consume a third-party service that uses an allowlist of IP address as a security measure. To learn more, see [What is Virtual Network NAT?](../virtual-network/nat-overview.md).
+Virtual network address translation (NAT) simplifies outbound-only internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. A NAT can be useful for Azure Functions or Web Apps that need to consume a third-party service that uses an allowlist of IP addresses as a security measure. To learn more, see [What is Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md).
This tutorial shows you how to use virtual network NATs to route outbound traffic from an HTTP triggered function. This function lets you check its own outbound IP address. During this tutorial, you'll:
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/language-support-policy.md
+
+ Title: Azure Functions language runtime support policy
+description: Learn about Azure Functions language runtime support policy
+ Last updated : 06/14/2021++
+# Language runtime support policy
+
+This article explains the Azure Functions language runtime support policy.
+
+## Retirement process
+
+The Azure Functions runtime is built around various components, including operating systems, the Azure Functions host, and language-specific workers. To maintain full support coverage for function apps, Azure Functions uses a phased reduction in support as programming language versions reach their end-of-life dates. For most language versions, the retirement date coincides with the community end-of-life date.
+
+### Notification phase
+
+We'll send notification emails to function app users about upcoming language version retirements. The notifications are sent at least one year before the date of retirement. When you receive a notification, you should prepare to upgrade the language version that your function apps use to a supported version.
+
+### Retirement phase
+
+* __Phase 1:__ On the end-of-life date for a language version, you can no longer create new function apps targeting that language version. For at least 60 days after this date, existing function apps continue to run on that language version and continue to receive updates. During this phase, you're highly encouraged to upgrade the language version of your affected function apps to a supported version.
+
+* __Phase 2:__ Starting at least 60 days after the language end-of-life date, we can no longer guarantee that function apps targeting this language version will continue to run on the platform.
++
+## Retirement policy exceptions
+
+There are a few exceptions to the retirement policy outlined above. Here is a list of language versions that are approaching or have reached their end-of-life dates but continue to be supported on the platform until further notice. When these language versions reach their end-of-life dates, they're no longer updated or patched. Because of this, we discourage you from developing and running your function apps on these language versions.
+
+|Language Versions |EOL Date |Expected Retirement Date|
+|--|--|-|
+|Node 6|30 April 2019|TBA|
+|Node 8|31 December 2019|TBA|
+|Node 10|30 April 2021|TBA|
+|PowerShell Core 6| 4 September 2020|TBA|
+|Python 3.6 |23 December 2021|TBA|
+
+
+## Language version support timeline
+
+To learn more about the support timelines for specific language versions, visit the following external resources:
+* .NET - [dotnet.microsoft.com](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
+* Node - [github.com](https://github.com/nodejs/Release#release-schedule)
+* Java - [azul.com](https://www.azul.com/products/azul-support-roadmap/)
+* PowerShell - [docs.microsoft.com](/powershell/scripting/powershell-support-lifecycle?view=powershell-7.1&preserve-view=true#powershell-releases-end-of-life)
+* Python - [devguide.python.org](https://devguide.python.org/#status-of-python-branches)
+
+## Configuring language versions
+
+|Language | Configuration guides |
+|--|--|
+|C# (class library) |[link](./functions-dotnet-class-library.md#supported-versions)|
+|Node |[link](./functions-reference-node.md#setting-the-node-version)|
+|PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)|
+|Python |[link](./functions-reference-python.md#python-version)|
+
+
+## Next steps
+
+To learn more about how to upgrade the language versions of your function apps, see the following resources:
++++ [Currently supported language versions](./supported-languages.md#languages-by-runtime-version)
azure-functions Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/supported-languages.md
There are two levels of support:
[!INCLUDE [functions-supported-languages](../../includes/functions-supported-languages.md)]
+### Language major version support
+
+Azure Functions provides a guarantee of support for the major versions of supported programming languages. For most languages, minor or patch versions are released to update a supported major version. Examples of minor or patch versions include Python 3.9.1 and Node 14.17. After new minor versions of supported languages become available, the minor versions used by your function apps are automatically upgraded to these newer minor or patch versions.
+
+> [!NOTE]
+>Because Azure Functions can remove support for older minor versions at any time after a new minor version is available, you shouldn't pin your function apps to a specific minor/patch version of a programming language.
+>
+ ## Custom handlers Custom handlers are lightweight web servers that receive events from the Azure Functions host. Any language that supports HTTP primitives can implement a custom handler. This means that custom handlers can be used to create functions in languages that aren't officially supported. To learn more, see [Azure Functions custom handlers](functions-custom-handlers.md).
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Virtual Machines (incl. Reserved Instances)](https://azure.microsoft.com/services/virtual-machines/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
-| [Virtual Network NAT](../../virtual-network/nat-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Visual Studio Codespaces](https://azure.microsoft.com/services/visual-studio-online/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Virtual Network NAT](../../virtual-network/nat-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | :heavy_check_mark: | | | | :heavy_check_mark: | :heavy_check_mark: | [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
Azure Traffic Manager supports Impact Level 5 workloads in Azure Government with
Azure Virtual Network supports Impact Level 5 workloads in Azure Government with no extra configuration required.
-### [Virtual NAT](../virtual-network/nat-overview.md)
+### [Virtual NAT](../virtual-network/nat-gateway/nat-overview.md)
Virtual NAT supports Impact Level 5 workloads in Azure Government with no extra configuration required.
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-collection.md
By default IP addresses are temporarily collected, but not stored in Application
When telemetry is sent to Azure, the IP address is used to do a geolocation lookup using [GeoLite2 from MaxMind](https://dev.maxmind.com/geoip/geoip2/geolite2/). The results of this lookup are used to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded and `0.0.0.0` is written to the `client_IP` field. * Browser telemetry: We temporarily collect the sender's IP address. IP address is calculated by the ingestion endpoint.
-* Server telemetry: The Application Insights telemetry module temporarily collects the client IP address. IP address isn't collected locally when the `X-Forwarded-For` header is set.
+* Server telemetry: The Application Insights telemetry module temporarily collects the client IP address. IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming list of IPs has more than one IP address, the last IP is used to populate geolocation fields.
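To illustrate the multiple-IP case described above, here's a small PowerShell sketch with a made-up header value showing which address would populate the geolocation fields:

```powershell
# When X-Forwarded-For carries several addresses, the last one in the list
# is the address used for the geolocation lookup.
$xForwardedFor = "203.0.113.7, 198.51.100.9, 192.0.2.4"   # example header value
$ipUsedForGeo = ($xForwardedFor -split ',')[-1].Trim()
$ipUsedForGeo   # 192.0.2.4
```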
This behavior is by design to help avoid unnecessary collection of personal data. Whenever possible, we recommend avoiding the collection of personal data.
azure-monitor Solution Agenthealth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/solution-agenthealth.md
A record with a type of **Heartbeat** is created. These records have the proper
| `SCAgentChannel` | Value is *Direct* and/or *SCManagementServer*.| | `IsGatewayInstalled` | If Log Analytics gateway is installed, value is *true*, otherwise value is *false*.| | `ComputerIP` | The public IP address of the computer. On Azure VMs, this will show the public IP if one is available. For VMs using private IPs, this will display the Azure SNAT address (not the private IP address). |
+| `ComputerPrivateIPs` | List of private IPs of the computer. |
| `RemoteIPCountry` | Geographic location where computer is deployed.| | `ManagementGroupName` | Name of Operations Manager management group.| | `SourceComputerId` | Unique ID of computer.|
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
As a local bypass to the All or Nothing behavior, you can select not to update y
That approach isn't recommended for production environments.
+## Limits and additional considerations
-### Consider limits
+### AMPLS limits
The AMPLS object has the following limits: * A VNet can only connect to **one** AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
Go to the Azure portal. In your resource's menu, there's a menu item called **Ne
> [!NOTE] > Starting August 16, 2021, Network Isolation will be strictly enforced. Resources set to block queries from public networks, and that aren't associated with an AMPLS, will stop accepting queries from any network.
-![LA Network Isolation](./media/private-link-security/ampls-log-analytics-lan-network-isolation-6.png)
+![LA Network Isolation](./media/private-link-security/ampls-network-isolation.png)
### Connected Azure Monitor Private Link scopes Here you can review and configure the resource's connections to Azure Monitor Private Links scopes. Connecting to scopes (AMPLSs) allows traffic from the virtual network connected to each AMPLS to reach this resource, and has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Your resource can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
For more information on bringing your own storage account, see [Customer-owned s
## Restrictions and limitations ### AMPLS
-The AMPLS object has a number of limits you should consider when planning your Private Link setup. See [Consider limits](#consider-limits) for a deeper review of these limits.
+
+The AMPLS object has a number of limits you should consider when planning your Private Link setup. See [AMPLS limits](#ampls-limits) for a deeper review of these limits.
### Agents
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/resource-graph-samples.md
+
+ Title: Azure Resource Graph sample queries for Azure Monitor
+description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties.
Last updated : 07/07/2021+++++
+# Azure Resource Graph sample queries for Azure Monitor
+
+This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries
+for Azure Monitor. For a complete list of Azure Resource Graph samples, see
+[Resource Graph samples by Category](../governance/resource-graph/samples/samples-by-category.md) and
+[Resource Graph samples by Table](../governance/resource-graph/samples/samples-by-table.md).
+
+## Azure Monitor
++
+## Next steps
+
+- Learn more about the [query language](../governance/resource-graph/concepts/query-language.md).
+- Learn more about how to [explore resources](../governance/resource-graph/concepts/explore-resources.md).
+- See samples of [Starter language queries](../governance/resource-graph/samples/starter.md).
+- See samples of [Advanced language queries](../governance/resource-graph/samples/advanced.md).
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-management-group.md
Title: Use Bicep tp deploy resources to management group
+ Title: Use Bicep to deploy resources to management group
description: Describes how to create a Bicep file that deploys resources at the management group scope. Last updated 06/01/2021
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/learn-bicep.md
This path contains the following modules.
| [Deploy child and extension resources by using Bicep](/learn/modules/child-extension-bicep-templates/) | This module shows how to deploy various Azure resources in your Bicep code. Learn about child and extension resources, and how they can be defined and used within Bicep. Use Bicep to work with resources that you created outside a Bicep template or module. | | [Deploy resources to subscriptions, management groups, and tenants by using Bicep](/learn/modules/deploy-resources-scopes-bicep/) | Deploy Azure resources at the subscription, management group, and tenant scope. Learn what these resources are, why you would use them, and how you create Bicep code to deploy them. Also learn how to create a single set of Bicep files that you can deploy across multiple scopes in one operation. | | [Extend templates by using deployment scripts](/learn/modules/extend-resource-manager-template-deployment-scripts/) | Learn how to add custom steps to your Bicep file or Azure Resource Manager template (ARM template) by using deployment scripts. |
-| [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs/) | Learn how to create and publish template specs, and how to deploy them.|
## Other modules
-In addition to the preceding path, the following module contains Bicep content.
+In addition to the preceding path, the following modules contain Bicep content.
| Learn module | Description |
| --- | --- |
| [Preview Azure deployment changes by using what-if](/learn/modules/arm-template-whatif/) | This module teaches you how to preview your changes with the what-if operation. By using what-if, you can make sure your Bicep file only makes changes that you expect. |
+| [Publish libraries of reusable infrastructure code by using template specs](/learn/modules/arm-template-specs/) | Learn how to create and publish template specs, and how to deploy them. |
+| [Authenticate your Azure deployment pipeline by using service principals](/learn/modules/authenticate-azure-deployment-pipeline-service-principals/) | Service principals enable your deployment pipelines to authenticate securely with Azure. In this module, you'll learn what service principals are, how they work, and how to create them. You'll also learn how to grant them permission to your Azure resources so that your pipelines can deploy your Bicep files. |
## Next steps
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 06/23/2021 Last updated : 07/06/2021 # Azure subscription and service limits, quotas, and constraints
azure-sql Service Tiers General Purpose Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-general-purpose-business-critical.md
Previously updated : 12/14/2020 Last updated : 7/7/2021 # Azure SQL Database and Azure SQL Managed Instance service tiers [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-Azure SQL Database and Azure SQL Managed Instance are based on SQL Server database engine architecture that's adjusted for the cloud environment to ensure 99.99 percent availability, even if there is an infrastructure failure. Two service tiers are used by Azure SQL Database and Azure SQL Managed Instance, each with a different architectural model. These service tiers are:
+ Two [vCore](service-tiers-vcore.md) service tiers are available in both Azure SQL Database and Azure SQL Managed Instance:
-- [General purpose](service-tier-general-purpose.md), which is designed for budget-oriented workloads.
-- [Business critical](service-tier-business-critical.md), which is designed for low-latency workloads with high resiliency to failures and fast failovers.
+- [General purpose](service-tier-general-purpose.md) is a budget-friendly tier designed for most workloads with common performance and availability requirements.
+- [Business critical](service-tier-business-critical.md) tier is designed for performance-sensitive workloads with strict availability requirements.
-Azure SQL Database has an additional service tier:
+Azure SQL Database also provides the Hyperscale service tier:
-- [Hyperscale](service-tier-hyperscale.md), which is designed for most business workloads, providing highly scalable storage, read scale-out, and fast database restore capabilities.
-
-This article discusses differences between the service tiers, storage and backup considerations for the general purpose and business critical service tiers in the vCore-based purchasing model.
+- [Hyperscale](service-tier-hyperscale.md) is designed for most business workloads, providing highly scalable storage, read scale-out, fast scaling, and fast database restore capabilities.
## Service tier comparison
-The following table describes the key differences between service tiers for the latest generation (Gen5). Note that service tier characteristics might be different in SQL Database and SQL Managed Instance.
+The following table describes the key differences between service tiers.
-|-| Resource type | General Purpose | Hyperscale | Business Critical |
+|-| Resource type | General Purpose | Hyperscale | Business Critical |
|:--:|:--:|:--:|:--:|:--:|
| **Best for** | | Offers budget oriented balanced compute and storage options. | Most business workloads. Auto-scaling storage size up to 100 TB, fluid vertical and horizontal compute scaling, fast database restore. | OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
-| **Available in resource type:** ||SQL Database / SQL Managed Instance | Single Azure SQL Database | SQL Database / SQL Managed Instance |
-| **Compute size**| SQL Database | 1 to 80 vCores | 1 to 80 vCores | 1 to 80 vCores |
+| **Available in resource type:** ||SQL Database / SQL Managed Instance | Single Azure SQL Database | SQL Database / SQL Managed Instance |
+| **Compute size**| SQL Database | 1 to 80 vCores | 1 to 80 vCores | 1 to 128 vCores |
| | SQL Managed Instance | 4, 8, 16, 24, 32, 40, 64, 80 vCores | N/A | 4, 8, 16, 24, 32, 40, 64, 80 vCores | | | SQL Managed Instance pools | 2, 4, 8, 16, 24, 32, 40, 64, 80 vCores | N/A | N/A |
-| **Storage type** | All | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance) |
-| **Database size** | SQL Database | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
+| **Storage type** | All | Remote storage | Tiered remote and local SSD storage | Local SSD storage |
+| **Database size** | SQL Database | 1 GB – 4 TB | 40 GB – 100 TB | 1 GB – 4 TB |
| | SQL Managed Instance | 32 GB – 8 TB | N/A | 32 GB – 4 TB |
-| **Storage size** | SQL Database | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
+| **Storage size** | SQL Database | 1 GB – 4 TB | 40 GB – 100 TB | 1 GB – 4 TB |
| | SQL Managed Instance | 32 GB – 8 TB | N/A | 32 GB – 4 TB |
-| **TempDB size** | SQL Database | [32 GB per vCore](resource-limits-vcore-single-databases.md#general-purposeprovisioned-computegen4) | [32 GB per vCore](resource-limits-vcore-single-databases.md#hyperscaleprovisioned-computegen5) | [32 GB per vCore](resource-limits-vcore-single-databases.md#business-criticalprovisioned-computegen4) |
+| **TempDB size** | SQL Database | [32 GB per vCore](resource-limits-vcore-single-databases.md) | [32 GB per vCore](resource-limits-vcore-single-databases.md) | [32 GB per vCore](resource-limits-vcore-single-databases.md) |
| | SQL Managed Instance | [24 GB per vCore](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | Up to 4 TB - [limited by storage size](../managed-instance/resource-limits.md#service-tier-characteristics) |
-| **Log write throughput** | SQL Database | [1.875 MB/s per vCore (max 30 MB/s)](resource-limits-vcore-single-databases.md#general-purposeprovisioned-computegen4) | 100 MB/s | [6 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md#business-criticalprovisioned-computegen4) |
+| **Log write throughput** | SQL Database | Single databases: [4.5 MB/s per vCore (max 50 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [6 MB/s per vCore (max 62.5 MB/s)](resource-limits-vcore-elastic-pools.md)| 100 MB/s | Single databases: [12 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [15 MB/s per vCore (max 120 MB/s)](resource-limits-vcore-elastic-pools.md)|
| | SQL Managed Instance | [3 MB/s per vCore (max 22 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | [4 MB/s per vCore (max 48 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) |
|**Availability**|All| 99.99% | [99.95% with one secondary replica, 99.99% with more replicas](service-tier-hyperscale-frequently-asked-questions-faq.yml#what-slas-are-provided-for-a-hyperscale-database-) | 99.99% <br/> [99.995% with zone redundant single database](https://azure.microsoft.com/blog/understanding-and-leveraging-azure-sql-database-sla/) |
-|**Backups**|All|RA-GRS, 7-35 days (7 days by default). Maximum retention for Basic tier is 7 days. | RA-GRS, 7 days, constant time point-in-time recovery (PITR) | RA-GRS, 7-35 days (7 days by default) |
-|**In-memory OLTP** | | N/A | N/A | Available |
+|**Backups**|All|RA-GRS, 1-35 days (7 days by default) | RA-GRS, 7 days, fast point-in-time recovery (PITR) | RA-GRS, 1-35 days (7 days by default) |
+|**In-memory OLTP** | | N/A | Partial support. Memory-optimized table types, table variables, and natively compiled modules are supported. | Available |
|**Read-only replicas**| | 0 built-in <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) | 0 - 4 built-in | 1 built-in, included in price <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) |
|**Pricing/billing** | SQL Database | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. | [vCore for each replica and used storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS not yet charged. | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |
| | SQL Managed Instance | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged. | N/A | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged.|
The following table describes the key differences between service tiers for the
For more information, see the detailed differences between the service tiers in [Azure SQL Database (vCore)](resource-limits-vcore-single-databases.md), [single Azure SQL Database (DTU)](resource-limits-dtu-single-databases.md), [pooled Azure SQL Database (DTU)](resource-limits-dtu-single-databases.md), and [Azure SQL Managed Instance](../managed-instance/resource-limits.md) pages.

> [!NOTE]
-> For information about the hyperscale service tier in the vCore-based purchasing model, see [hyperscale service tier](service-tier-hyperscale.md). For a comparison of the vCore-based purchasing model with the DTU-based purchasing model, see [purchasing models and resources](purchasing-models.md).
+> For information about the Hyperscale service tier, see [Hyperscale service tier](service-tier-hyperscale.md). For a comparison of the vCore-based purchasing model with the DTU-based purchasing model, see [purchasing models and resources](purchasing-models.md).
## Data and log storage
-The following factors affect the amount of storage used for data and log files, and apply to General Purpose and Business Critical. For details on data and log storage in Hyperscale, see [Hyperscale service tier](service-tier-hyperscale.md).
-
-- The allocated storage is used by data files (MDF) and log files (LDF).
-- Each single database compute size supports a maximum database size, with a default maximum size of 32 GB.
-- When you configure the required single database size (the size of the MDF file), 30 percent more additional storage is automatically added to support LDF files.
-- You can select any single database size between 10 GB and the supported maximum.
- - For storage in the standard or general purpose service tiers, increase or decrease the size in 10-GB increments.
- - For storage in the premium or business critical service tiers, increase or decrease the size in 250-GB increments.
-- In the general purpose service tier, `tempdb` uses an attached SSD, and this storage cost is included in the vCore price.
-- In the business critical service tier, `tempdb` shares the attached SSD with the MDF and LDF files, and the `tempdb` storage cost is included in the vCore price.
-- In the DTU premium service tier, `tempdb` shares the attached SSD with MDF and LDF files.
-- The storage size for a SQL Managed Instance must be specified in multiples of 32 GB.
+The following factors affect the amount of storage used for data and log files, and apply to General Purpose and Business Critical tiers. For details on data and log storage in Hyperscale, see [Hyperscale service tier](service-tier-hyperscale.md).
+- Each compute size supports a maximum data size, with a default of 32 GB.
+- When you configure maximum data size, an additional 30 percent of storage is automatically added for log files.
+- You can select any maximum data size between 1 GB and the supported storage size maximum, in 1 GB increments.
+- In the General Purpose service tier, `tempdb` uses local SSD storage, and this storage cost is included in the vCore price.
+- In the Business Critical service tier, `tempdb` shares local SSD storage with data and log files, and `tempdb` storage cost is included in the vCore price.
+- The maximum storage size for a SQL Managed Instance must be specified in multiples of 32 GB.
> [!IMPORTANT]
-> You are charged for the total storage allocated for MDF and LDF files.
+> In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured for a database, elastic pool, or managed instance. In the Hyperscale tier, you are charged for the allocated data storage.
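As a worked example of the sizing rules above (a sketch, assuming the 30 percent log allocation is billed on top of the configured maximum data size, as the preceding bullets describe):

```python
def billed_storage_gb(max_data_size_gb: float) -> float:
    """Configured maximum data size plus the 30 percent automatically
    added for log files (General Purpose / Business Critical tiers).
    Assumption for illustration: both amounts are billed together."""
    return max_data_size_gb * 1.30

# A database configured with a 100 GB maximum data size is billed for
# 100 GB of data storage plus 30 GB of log storage:
print(billed_storage_gb(100))  # 130.0
```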
-To monitor the current total size of your MDF and LDF files, use [sp_spaceused](/sql/relational-databases/system-stored-procedures/sp-spaceused-transact-sql). To monitor the current size of the individual MDF and LDF files, use [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql).
+To monitor the current allocated and used data storage size in SQL Database, use *allocated_data_storage* and *storage* Azure Monitor [metrics](/azure/azure-monitor/essentials/metrics-supported#microsoftsqlserversdatabases) respectively. To monitor total consumed instance storage size for SQL Managed Instance, use the *storage_space_used_mb* [metric](/azure/azure-monitor/essentials/metrics-supported#microsoftsqlmanagedinstances). To monitor the current allocated and used storage size of individual data and log files in a database using T-SQL, use the [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) view and the [FILEPROPERTY(... , 'SpaceUsed')](/sql/t-sql/functions/fileproperty-transact-sql) function.
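For example, a monitoring script could run that query through T-SQL. The following is a minimal sketch using the `pyodbc` package; the driver name, server, database, and credentials are placeholders, not values from the article:

```python
import pyodbc

# Placeholders: substitute your server, database, and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:<server>.database.windows.net,1433;"
    "DATABASE=<database>;UID=<user>;PWD=<password>"
)

# sys.database_files reports allocated size, and FILEPROPERTY(..., 'SpaceUsed')
# reports used space; both are in 8-KB pages, so divide by 128 to get MB.
rows = conn.execute(
    "SELECT name, type_desc, "
    "       size / 128 AS allocated_mb, "
    "       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb "
    "FROM sys.database_files;"
).fetchall()

for name, type_desc, allocated_mb, used_mb in rows:
    print(f"{name} ({type_desc}): {used_mb} MB used of {allocated_mb} MB allocated")
```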
-> [!IMPORTANT]
+> [!TIP]
> Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).

## Backups and storage
-Storage for database backups is allocated to support the point-in-time restore (PITR) and [long-term retention (LTR)](long-term-retention-overview.md) capabilities of SQL Database and SQL Managed Instance. This storage is allocated separately for each database and billed as two separate per-database charges.
+Storage for database backups is allocated to support the [point-in-time restore (PITR)](recovery-using-backups.md) and [long-term retention (LTR)](long-term-retention-overview.md) capabilities of SQL Database and SQL Managed Instance. This storage is separate from data and log file storage, and is billed separately.
-- **PITR**: Individual database backups are copied to [read-access geo-redundant (RA-GRS) storage](../../storage/common/geo-redundant-design.md) automatically. The storage size increases dynamically as new backups are created. The storage is used by weekly full backups, daily differential backups, and transaction log backups, which are copied every 5 minutes. The storage consumption depends on the rate of change of the database and the retention period for backups. You can configure a separate retention period for each database between 7 and 35 days. A minimum storage amount equal to 100 percent (1x) of the database size is provided at no extra charge. For most databases, this amount is enough to store 7 days of backups.
-- **LTR**: You also have the option to configure long-term retention of full backups for up to 10 years [for SQL Managed Instance](long-term-retention-overview.md). If you set up an LTR policy, these backups are stored in RA-GRS storage automatically, but you can control how often the backups are copied. To meet different compliance requirements, you can select different retention periods for weekly, monthly, and/or yearly backups. The configuration you choose determines how much storage will be used for LTR backups. To estimate the cost of LTR storage, you can use the LTR pricing calculator. For more information, see [SQL Database long-term retention](long-term-retention-overview.md).
+- **PITR**: In General Purpose and Business Critical tiers, individual database backups are copied to [read-access geo-redundant (RA-GRS) storage](../../storage/common/geo-redundant-design.md) automatically. The storage size increases dynamically as new backups are created. The storage is used by full, differential, and transaction log backups. The storage consumption depends on the rate of change of the database and the retention period configured for backups. You can configure a separate retention period for each database between 1 and 35 days for SQL Database, and 0 to 35 days for SQL Managed Instance. A backup storage amount equal to the configured maximum data size is provided at no extra charge.
+- **LTR**: You also have the option to configure long-term retention of full backups for up to 10 years. If you set up an LTR policy, these backups are stored in RA-GRS storage automatically, but you can control how often the backups are copied. To meet different compliance requirements, you can select different retention periods for weekly, monthly, and/or yearly backups. The configuration you choose determines how much storage will be used for LTR backups. For more information, see [Long-term backup retention](long-term-retention-overview.md).
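Because a backup storage amount equal to the configured maximum data size is provided at no extra charge, the billable PITR backup storage can be estimated as in this sketch (the input values are illustrative):

```python
def billable_pitr_backup_gb(consumed_backup_gb: float,
                            max_data_size_gb: float) -> float:
    """Only backup consumption beyond the configured maximum data size
    is billed; up to that amount is provided at no extra charge."""
    return max(0.0, consumed_backup_gb - max_data_size_gb)

# 250 GB of backups consumed for a database with a 100 GB maximum data size:
print(billable_pitr_backup_gb(250, 100))  # 150.0 GB billed
```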
## Next steps
-For details about the specific compute and storage sizes available in the general purpose and business critical service tiers, see:
+For details about the specific compute and storage sizes available in vCore service tiers, see:
- [vCore-based resource limits for Azure SQL Database](resource-limits-vcore-single-databases.md).
- [vCore-based resource limits for pooled databases in Azure SQL Database](resource-limits-vcore-elastic-pools.md).
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
The following virtual network features are currently *not supported* with SQL Ma
- **Microsoft peering**: Enabling [Microsoft peering](../../expressroute/expressroute-faqs.md#microsoft-peering) on ExpressRoute circuits peered directly or transitively with a virtual network where SQL Managed Instance resides affects traffic flow between SQL Managed Instance components inside the virtual network and services it depends on, causing availability issues. SQL Managed Instance deployments to virtual network with Microsoft peering already enabled are expected to fail.
- **Global virtual network peering**: [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md) connectivity across Azure regions doesn't work for SQL Managed Instances placed in subnets created before 9/22/2020.
- **AzurePlatformDNS**: Using the AzurePlatformDNS [service tag](../../virtual-network/service-tags-overview.md) to block platform DNS resolution would render SQL Managed Instance unavailable. Although SQL Managed Instance supports customer-defined DNS for DNS resolution inside the engine, there is a dependency on platform DNS for platform operations.
-- **NAT gateway**: Using [Azure Virtual Network NAT](../../virtual-network/nat-overview.md) to control outbound connectivity with a specific public IP address would render SQL Managed Instance unavailable. The SQL Managed Instance service is currently limited to use of basic load balancer that doesn't provide coexistence of inbound and outbound flows with Virtual Network NAT.
+- **NAT gateway**: Using [Azure Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) to control outbound connectivity with a specific public IP address would render SQL Managed Instance unavailable. The SQL Managed Instance service is currently limited to use of basic load balancer that doesn't provide coexistence of inbound and outbound flows with Virtual Network NAT.
- **IPv6 for Azure Virtual Network**: Deploying SQL Managed Instance to [dual stack IPv4/IPv6 virtual networks](../../virtual-network/ipv6-overview.md) is expected to fail. Associating a network security group (NSG) or route table (UDR) containing IPv6 address prefixes to the SQL Managed Instance subnet, or adding IPv6 address prefixes to an NSG or UDR that is already associated with the Managed Instance subnet, would render SQL Managed Instance unavailable. SQL Managed Instance deployments to a subnet with an NSG and UDR that already have IPv6 prefixes are expected to fail.
- **Azure DNS private zones with a name reserved for Microsoft services**: The following names are reserved: windows.net, database.windows.net, core.windows.net, blob.core.windows.net, table.core.windows.net, management.core.windows.net, monitoring.core.windows.net, queue.core.windows.net, graph.windows.net, login.microsoftonline.com, login.windows.net, servicebus.windows.net, and vault.azure.net. Deploying SQL Managed Instance to a virtual network with an associated [Azure DNS private zone](../../dns/private-dns-privatednszone.md) that uses a name reserved for Microsoft services would fail. Associating an Azure DNS private zone with a reserved name with a virtual network containing a Managed Instance would render SQL Managed Instance unavailable. Follow [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md) for the proper Private Link configuration.
- **Service endpoint policies for Azure Storage**: Deploying SQL Managed Instance to a subnet that has associated [service endpoint policies](../../virtual-network/virtual-network-service-endpoint-policies-overview.md) will fail. Service endpoint policies cannot be associated with a subnet that hosts a Managed Instance.
azure-sql Automated Backup Sql 2014 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/automated-backup-sql-2014.md
On the **SQL Server settings** tab, scroll down to **Automated backup** and sele
## Configure existing VMs
-
For existing SQL Server VMs, you can enable and disable automated backups, change the retention period, specify the storage account, and enable encryption from the Azure portal.
-Navigate to the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource) for your SQL Server 2014 virtual machine and then select **Backups**.
+Navigate to the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource) for your SQL Server 2014 virtual machine and then select **Backups**.
![SQL Automated Backup for existing VMs](./media/automated-backup-sql-2014/azure-sql-rm-autobackup-existing-vms.png)
azure-sql Automated Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/automated-backup.md
In the **SQL Server settings** tab, select **Enable** under **Automated backup**
## Configure existing VMs
-
-For existing SQL Server virtual machines, go to the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource) and then select **Backups** to configure your automated backups.
+For existing SQL Server virtual machines, go to the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource) and then select **Backups** to configure your automated backups.
![SQL Automated Backup for existing VMs](./media/automated-backup/sql-server-configuration.png)
azure-sql Automated Patching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/automated-patching.md
For more information, see [Provision a SQL Server virtual machine on Azure](crea
### Existing VMs
-
-For existing SQL Server virtual machines, open your [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource) and select **Patching** under **Settings**.
+For existing SQL Server virtual machines, open your [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource) and select **Patching** under **Settings**.
![SQL Automatic Patching for existing VMs](./media/automated-patching/azure-sql-rm-patching-existing-vms.png)
azure-sql Azure Key Vault Integration Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/azure-key-vault-integration-configure.md
For a detailed walkthrough of provisioning, see [Provision a SQL virtual machine
### Existing VMs
-
-For existing SQL virtual machines, open your [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource) and select **Security** under **Settings**. Select **Enable** to enable Azure Key Vault integration.
+For existing SQL virtual machines, open your [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource) and select **Security** under **Settings**. Select **Enable** to enable Azure Key Vault integration.
![SQL Key Vault integration for existing VMs](./media/azure-key-vault-integration-configure/azure-sql-rm-akv-existing-vms.png)
azure-sql Business Continuity High Availability Disaster Recovery Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/business-continuity-high-availability-disaster-recovery-hadr-overview.md
Or you can configure a hybrid failover environment, with a licensed primary on-p
For more information, see the [product licensing terms](https://www.microsoft.com/licensing/product-licensing/products).
-To enable this benefit, go to your [SQL Server virtual machine resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource). Select **Configure** under **Settings**, and then choose the **Disaster Recovery** option under **SQL Server License**. Select the check box to confirm that this SQL Server VM will be used as a passive replica, and then select **Apply** to save your settings.
+To enable this benefit, go to your [SQL Server virtual machine resource](manage-sql-vm-portal.md#access-the-resource). Select **Configure** under **Settings**, and then choose the **Disaster Recovery** option under **SQL Server License**. Select the check box to confirm that this SQL Server VM will be used as a passive replica, and then select **Apply** to save your settings.
![Configure a disaster recovery replica in Azure](./media/business-continuity-high-availability-disaster-recovery-hadr-overview/dr-replica-in-portal.png)
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes.md
vm-windows-sql-server Previously updated : 06/01/2021 Last updated : 07/01/2021 # Documentation changes for SQL Server on Azure Virtual Machines [!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)] Azure allows you to deploy a virtual machine (VM) with an image of SQL Server built in. This article summarizes the documentation changes associated with new features and improvements in the recent releases of [SQL Server on Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/).
+## June 2021
+
+| Changes | Details |
+| --- | --- |
+| **Security enhancements in the Azure portal** | Once you've enabled [Azure Defender for SQL](/security-center/defender-for-sql-usage), you can view Security Center recommendations in the [SQL virtual machines resource in the Azure portal](manage-sql-vm-portal.md#security-center). |
+
## May 2021

| Changes | Details |
azure-sql Licensing Model Azure Hybrid Benefit Ahb Change https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/licensing-model-azure-hybrid-benefit-ahb-change.md
Changing the licensing model of your SQL Server VM has the following requirement
# [Azure portal](#tab/azure-portal)
-
You can modify the license model directly from the portal:
-1. Open the [Azure portal](https://portal.azure.com) and open the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource) for your SQL Server VM.
+1. Open the [Azure portal](https://portal.azure.com) and open the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource) for your SQL Server VM.
1. Select **Configure** under **Settings**.
1. Select the **Azure Hybrid Benefit** option, and select the check box to confirm that you have a SQL Server license with Software Assurance.
1. Select **Apply** at the bottom of the **Configure** page.
azure-sql Manage Sql Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/manage-sql-vm-portal.md
Title: Manage SQL Server virtual machines in Azure by using the Azure portal | Microsoft Docs
-description: Learn how to access the SQL virtual machine resource in the Azure portal for a SQL Server VM hosted on Azure.
+description: Learn how to access the SQL virtual machine resource in the Azure portal for a SQL Server VM hosted on Azure to modify SQL Server settings.
documentationcenter: na
vm-windows-sql-server Previously updated : 05/13/2019 Last updated : 05/30/2021
-# Manage SQL Server VMs in Azure by using the Azure portal
+# Manage SQL Server VMs by using the Azure portal
[!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)]
-In the [Azure portal](https://portal.azure.com), the [**SQL virtual machines**](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) resource is an independent management service to manage SQL Server on Azure VMs. You can use it to view all of your SQL Server VMs simultaneously and modify settings dedicated to SQL Server:
+In the [Azure portal](https://portal.azure.com), the [**SQL virtual machines**](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) resource is an independent management service to manage SQL Server on Azure Virtual Machines (VMs) that have been registered with the SQL Server IaaS Agent extension. You can use the resource to view all of your SQL Server VMs simultaneously and modify settings dedicated to SQL Server:
![SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-manage.png)
+The **SQL virtual machines** resource management point is different from the **Virtual machine** resource, which is used to manage the VM itself, such as starting, stopping, or restarting it.
-## Remarks
-- We recommend that you use the [**SQL virtual machines**](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.SqlVirtualMachine%2FSqlVirtualMachines) resource to view and manage your SQL Server VMs in Azure. But currently, the **SQL virtual machines** resource does not support the management of [end-of-support](sql-server-2008-extend-end-of-support.md) SQL Server VMs. To manage settings for your end-of-support SQL Server VMs, use the deprecated [SQL Server configuration tab](#access-the-sql-server-configuration-tab) instead.
-- The **SQL virtual machines** resource is available only to SQL Server VMs that have [registered with the SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md).
+## Prerequisite
+The **SQL virtual machines** resource is only available to SQL Server VMs that have been [registered with the SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md).
++
+## Access the resource
-## Access the SQL virtual machines resource
To access the **SQL virtual machines** resource, do the following:

1. Open the [Azure portal](https://portal.azure.com).
To access the **SQL virtual machines** resource, do the following:
> [!TIP]
> The **SQL virtual machines** resource is for dedicated SQL Server settings. Select the name of the VM in the **Virtual machine** box to open settings that are specific to the VM, but not exclusive to SQL Server.
-## Access the SQL Server configuration tab
-The **SQL Server configuration** tab has been deprecated. At this time, it's the only method to manage [end-of-support](sql-server-2008-extend-end-of-support.md) SQL Server VMs, and SQL Server VMs that have not been [registered with the SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md).
-To access the deprecated **SQL Server configuration** tab, go to the **Virtual machines** resource. Use the following steps:
+## License and edition
-1. Open the [Azure portal](https://portal.azure.com).
-1. Select **All Services**.
-1. Enter **virtual machines** in the search box.
-1. (Optional): Select the star next to **Virtual machines** to add this option to your **Favorites** menu.
-1. Select **Virtual machines**.
+Use the **Configure** page of the SQL virtual machine resource to change your SQL Server licensing metadata to **Pay as you go**, **Azure Hybrid Benefit**, or **HA/DR** for your free [HA/DR replica](business-continuity-high-availability-disaster-recovery-hadr-overview.md#free-dr-replica-in-azure).
+++
+![Change the version and edition of SQL Server VM metadata in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-license-edition.png)
+
+You can also modify the edition of SQL Server from the **Configure** page, such as **Enterprise**, **Standard**, or **Developer**.
+
+Changing the license and edition metadata in the Azure portal is only supported once the version and edition of SQL Server has been modified within the VM. To learn more, see how to change the [version](change-sql-server-version.md) and [edition](change-sql-server-edition.md) of SQL Server on Azure VMs.
+
+## Storage
+
+Use the **Configure** page of the SQL virtual machines resource to extend your data, log, and tempdb drives.
+
+![Extend storage in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-storage-configuration.png)
+
+## Patching
+
+Use the **Patching** page of the SQL virtual machines resource to enable automated patching of your VM and automatically install Windows and SQL Server updates marked as Important. You can also configure a maintenance schedule here, such as daily patching, a local start time for maintenance, and a maintenance window.
++
+![Configure automated patching and schedule in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-automated-patching.png)
++
+To learn more, see [Automated patching](automated-patching.md).
+++
+## Backups
+
+Use the **Backups** page of the SQL virtual machines resource to configure your automated backup settings, such as the retention period, which storage account to use, encryption, whether or not to back up system databases, and a backup schedule.
+
+![Configure automated backup and schedule in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-automated-backup.png)
+
+To learn more, see [Automated backup](automated-backup.md).
++
+## High availability (Preview)
+
+Use the **High Availability** page of the SQL virtual machines resource to create a Windows Server Failover Cluster, and configure an Always On availability group, availability group listener, and Azure Load Balancer. Configuring high availability using the Azure portal is currently in preview.
++
+![Configure a Windows Server Failover Cluster and an Always On availability group in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-high-availability.png)
++
+To learn more, see [Configure availability group by using the Azure portal](availability-group-azure-portal-configure.md).
+
+## Security Configuration
+
+Use the **Security Configuration** page of the SQL virtual machines resource to configure SQL Server security settings, such as which port to use, whether SQL authentication is enabled, and Azure Key Vault integration.
+
+![Configure SQL Server security in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-security-configuration.png)
+
+To learn more, see the [Security best practices](security-considerations-best-practices.md).
++
+## Security Center
- ![Search for virtual machines](./media/manage-sql-vm-portal/vm-search.png)
+Use the **Security Center** page of the SQL virtual machines resource to view Security Center recommendations directly in the SQL virtual machine blade. Enable [Azure Defender for SQL](../../../security-center/defender-for-sql-usage.md) to use this feature.
-1. The portal lists all virtual machines in the subscription. Select the one that you want to manage to open the **Virtual machines** resource. Use the search box if your SQL Server VM isn't appearing.
-1. Select **SQL Server configuration** in the **Settings** pane to manage your SQL Server VM.
+![Configure SQL Server Security Center settings in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-security-center.png)
- ![SQL Server configuration](./media/manage-sql-vm-portal/sql-vm-configuration.png)
## Next steps
azure-sql Security Considerations Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/security-considerations-best-practices.md
vm-windows-sql-server Previously updated : 03/23/2018 Last updated : 05/30/2021
This topic includes overall security guidelines that help establish secure acces
Azure complies with several industry regulations and standards that can enable you to build a compliant solution with SQL Server running in a virtual machine. For information about regulatory compliance with Azure, see [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
+In addition to the practices described in this topic, we recommend that you review and implement traditional on-premises security best practices, as well as virtual machine security best practices.
+
+## Azure Defender for SQL
+
+[Azure Defender for SQL](../../../security-center/defender-for-sql-introduction.md) enables Azure Security Center security features such as vulnerability assessments and security alerts. See [enable Azure Defender for SQL](../../../security-center/defender-for-sql-usage.md) to learn more.
+
+## Portal management
+
+After you've [registered your SQL Server VM with the SQL IaaS extension](sql-agent-extension-manually-register-single-vm.md), you can configure a number of security settings using the [SQL virtual machines resource](manage-sql-vm-portal.md) in the Azure portal, such as enabling Azure Key Vault integration, or SQL authentication.
+
+Additionally, after you've enabled [Azure Defender for SQL](../../../security-center/defender-for-sql-usage.md) you can view Security Center features directly within the [SQL virtual machines resource](manage-sql-vm-portal.md) in the Azure portal, such as vulnerability assessments and security alerts.
+
+See [manage SQL Server VM in the portal](manage-sql-vm-portal.md) to learn more.
+
+## Azure Key Vault integration
+
+There are multiple SQL Server encryption features, such as transparent data encryption (TDE), column level encryption (CLE), and backup encryption. These forms of encryption require you to manage and store the cryptographic keys you use for encryption. The Azure Key Vault service is designed to improve the security and management of these keys in a secure and highly available location. The SQL Server Connector enables SQL Server to use these keys from Azure Key Vault.
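As an example of verifying one of these features, the following sketch queries `sys.dm_database_encryption_keys` to check which databases are TDE-encrypted; it uses `pyodbc` with placeholder connection details, and is an illustration rather than part of the Key Vault configuration itself:

```python
import pyodbc

# Placeholder connection details; substitute your own server and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:<your-sql-vm>;DATABASE=master;UID=<user>;PWD=<password>"
)

# encryption_state 3 means the database is encrypted.
for db_name, state in conn.execute(
    "SELECT DB_NAME(database_id), encryption_state "
    "FROM sys.dm_database_encryption_keys;"
).fetchall():
    print(f"{db_name}: encryption_state = {state}")
```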
For comprehensive details, see the other articles in this series: [Checklist](performance-guidelines-best-practices-checklist.md), [VM size](performance-guidelines-best-practices-vm-size.md), [Storage](performance-guidelines-best-practices-storage.md), [HADR configuration](hadr-cluster-best-practices.md), [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
-## Control access to the SQL virtual machine
+See [Azure Key Vault integration](azure-key-vault-integration-configure.md) to learn more.
++
+## Access control
When you create your SQL Server virtual machine, consider how to carefully control who has access to the machine and to SQL Server. In general, you should do the following:
Finally, consider enabling encrypted connections for the instance of the SQL Ser
## Encryption
-Managed disks offer Server-Side Encryption, and Azure Disk Encryption. [Server-Side Encryption](../../../virtual-machines/disk-encryption.md) provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments. [Azure Disk Encryption](../../../security/fundamentals/azure-disk-encryption-vms-vmss.md) uses either Bitlocker or DM-Crypt technology, and integrates with Azure Key Vault to encrypt both the OS and data disks.
+Managed disks offer Server-Side Encryption, and Azure Disk Encryption. [Server-Side Encryption](../../../virtual-machines/disk-encryption.md) provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments. [Azure Disk Encryption](../../../security/fundamentals/azure-disk-encryption-vms-vmss.md) uses either BitLocker or DM-Crypt technology, and integrates with Azure Key Vault to encrypt both the OS and data disks.
-## Use a non-default port
+## Non-default port
By default, SQL Server listens on a well-known port, 1433. For increased security, configure SQL Server to listen on a non-default port, such as 1401. If you provision a SQL Server gallery image in the Azure portal, you can specify this port in the **SQL Server settings** blade.
-
To configure this after provisioning, you have two options:
-- For Resource Manager VMs, you can select **Security** from the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource). This provides an option to change the port.
+- For Resource Manager VMs, you can select **Security** from the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource). This provides an option to change the port.
![TCP port change in portal](./media/security-considerations-best-practices/sql-vm-change-tcp-port.png)
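After you change the port, client connection strings must name it explicitly; SQL Server uses a comma between host and port. A minimal sketch, using `pyodbc` and assuming the example server name `mysqlvm` and port 1401 from this article:

```python
import pyodbc

# "host,port" syntax: this instance listens on 1401 instead of the default 1433.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:mysqlvm,1401;"   # non-default port
    "DATABASE=master;UID=<user>;PWD=<password>;"
    "Encrypt=yes;"
)
print(conn.execute("SELECT @@SERVERNAME;").fetchone()[0])
```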
You don't want attackers to easily guess account names or passwords. Use the fol
- If you must use the **SA** login, enable the login after provisioning and assign a new strong password.
-## Additional best practices
-In addition to the practices described in this topic, we recommend that you review and implement the security best practices from both traditional on-premises security practices, as well as virtual machine security best practices.
-For more information about on-premises security practices, see [Security Considerations for a SQL Server Installation](/sql/sql-server/install/security-considerations-for-a-sql-server-installation) and the [Security center](/sql/relational-databases/security/security-center-for-sql-server-database-engine-and-azure-sql-database).
+## Next steps
-For more information about virtual machine security, see the [virtual machines security overview](../../../security/fundamentals/virtual-machines-overview.md).
+If you are also interested in best practices around performance, see [Performance Best Practices for SQL Server on Azure Virtual Machines](./performance-guidelines-best-practices-checklist.md).
+For other topics related to running SQL Server in Azure VMs, see [SQL Server on Azure Virtual Machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
-## Next steps
To learn more, see the other articles in this series:
To learn more, see the other articles in this series:
- [HADR settings](hadr-cluster-best-practices.md) - [Collect baseline](performance-guidelines-best-practices-collect-baseline.md)
-For other topics related to running SQL Server in Azure VMs, see [SQL Server on Azure Virtual Machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
SQL Server VMs that have registered the extension in *lightweight* mode can upgr
To upgrade the extension to full mode using the Azure portal, follow these steps:

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to your [SQL virtual machines](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource) resource.
+1. Go to your [SQL virtual machines](manage-sql-vm-portal.md#access-the-resource) resource.
1. Select your SQL Server VM, and select **Overview**.
1. For SQL Server VMs with the NoAgent or lightweight IaaS mode, select the **Only license type and edition updates are available with the SQL IaaS extension** message.
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
The following table details these benefits:
| **View disk utilization in portal** | Allows you to view a graphical representation of the disk utilization of your SQL data files in the Azure portal. <br/> Management mode: Full |
| **Flexible licensing** | Save on cost by [seamlessly transitioning](licensing-model-azure-hybrid-benefit-ahb-change.md) from the bring-your-own-license (also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and back again. <br/> Management mode: Lightweight & full|
| **Flexible version / edition** | If you decide to change the [version](change-sql-server-version.md) or [edition](change-sql-server-edition.md) of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM. <br/> Management mode: Lightweight & full|
+| **Security Center Portal integration** | If you've enabled [Azure Defender for SQL](/security-center/defender-for-sql-usage), then you can view Security Center recommendations directly in the [SQL virtual machines](manage-sql-vm-portal.md) resource of the Azure portal. See [Security best practices](security-considerations-best-practices.md) to learn more. <br/> Management mode: Lightweight & full|
## Management modes
azure-sql Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/storage-configuration.md
You can use the following quickstart template to deploy a SQL Server VM using st
## Existing VMs
-
-For existing SQL Server VMs, you can modify some storage settings in the Azure portal. Open your [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-sql-virtual-machines-resource), and select **Overview**. The SQL Server Overview page shows the current storage usage of your VM. All drives that exist on your VM are displayed in this chart. For each drive, the storage space displays in four sections:
+For existing SQL Server VMs, you can modify some storage settings in the Azure portal. Open your [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource), and select **Overview**. The SQL Server Overview page shows the current storage usage of your VM. All drives that exist on your VM are displayed in this chart. For each drive, the storage space displays in four sections:
* SQL data
* SQL log
azure-sql Ways To Connect To Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/ways-to-connect-to-sql.md
Server=mysqlvm;Integrated Security=true
## <a id="change"></a> Change SQL connectivity settings
-
You can change the connectivity settings for your SQL Server virtual machine in the Azure portal.

1. In the Azure portal, select **SQL virtual machines**.
azure-video-analyzer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/overview.md
Last updated 03/11/2021
# What is Azure Video Analyzer? (preview)
-
+
+Azure Video Analyzer provides a platform to build intelligent video applications that span the edge and the cloud. The platform consists of an IoT Edge module and an associated Azure service. It offers the capability to capture, record, and analyze live video, along with publishing the results - video and/or video analytics. Video can be published to the edge or the Video Analyzer cloud service, while video analytics can be published to Azure services (in the cloud and/or the edge). The platform can be used to enhance IoT solutions with video analytics. Video Analyzer functionality can be combined with other Azure IoT Edge modules, such as Stream Analytics on IoT Edge and Cognitive Services on IoT Edge, and with Azure services in the cloud, such as Event Hubs and Cognitive Services, to build powerful hybrid (for example, edge + cloud) applications.
+
+The Video Analyzer edge module is designed to be an extensible platform, enabling you to connect different video analysis edge modules (such as Cognitive Services containers, custom edge modules built by you with open-source machine learning models, or custom models trained with your own data) to it and use them to analyze live video without worrying about the complexity of building and running a live video pipeline. The Video Analyzer cloud service enables you to play back the video and video analytics from such workflows.
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
Last updated 06/14/2021
# Integrate Azure Security Center with Azure VMware Solution
-Azure Security Center provides advanced threat protection across your Azure VMware Solution and on-premises virtual machines (VMs). It assesses the vulnerability of Azure VMware Solution VMs and raise alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Azure Security Center. For more information, see [Working with security policies](../security-center/tutorial-security-policy.md).
+Azure Security Center provides advanced threat protection across your Azure VMware Solution and on-premises virtual machines (VMs). It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Azure Security Center. For more information, see [Working with security policies](../security-center/tutorial-security-policy.md).
Azure Security Center offers many features, including:

- File integrity monitoring
The diagram shows the integrated monitoring architecture of integrated security
## View recommendations and passed assessments
-This provides you with the security health details of your resource.
+Recommendations and assessments provide you with the security health details of your resource.
1. In Azure Security Center, select **Inventory** from the left pane.
This provides you with the security health details of your resource.
## Deploy an Azure Sentinel workspace
-Azure Sentinel is built on top of a Log Analytics workspace, so you'll just need to select the Log Analytics workspace you want to use.
+Since Azure Sentinel is built on top of a Log Analytics workspace, you'll only need to select the workspace you want to use.
1. In the Azure portal, search for **Azure Sentinel**, and select it.
After connecting data sources to Azure Sentinel, you can create rules to generat
- Status
-5. On the **Set rule logic** tab, enter the required information and then select **Next**.
+5. On the **Set rule logic** tab, enter the required information, and then select **Next**.
- Rule query (here showing our example query)
After connecting data sources to Azure Sentinel, you can create rules to generat
6. On the **Incident settings** tab, enable **Create incidents from alerts triggered by this analytics rule** and select **Next: Automated response**.
- :::image type="content" source="media/azure-security-integration/create-new-analytic-rule-wizard.png" alt-text="Screenshot of the Analytic rule wizard for creating a new rule in Azure Sentinel. Shows Create incidents from alerts triggered by this rule as enabled.":::
+ :::image type="content" source="media/azure-security-integration/create-new-analytic-rule-wizard.png" alt-text="Screenshot showing the Analytic rule wizard for creating a new rule in Azure Sentinel.":::
7. Select **Next: Review**.
-8. On the **Review and create** tab, review the information and select **Create**.
+8. On the **Review and create** tab, review the information, and select **Create**.
>[!TIP] >After the third failed attempt to sign in to Windows server, the created rule triggers an incident for every unsuccessful attempt.
azure-vmware Azure Vmware Solution Horizon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-horizon.md
Here, we focus specifically on deploying Horizon on Azure VMware Solution. For g
With Horizon's introduction on Azure VMware Solution, there are now two Virtual Desktop Infrastructure (VDI) solutions on the Azure platform. The following diagram summarizes the key differences at a high level.
-Horizon 2006 and later versions on the Horizon 8 release line supports both on-premises deployment and Azure VMware Solution deployment. There are a few Horizon features that are supported on-premises but not on Azure VMware Solution. Additional products in the Horizon ecosystem are also supported. For for information, see [feature parity and interoperability](https://kb.vmware.com/s/article/80850).
+Horizon 2006 and later versions on the Horizon 8 release line support both on-premises deployment and Azure VMware Solution deployment. A few Horizon features are supported on-premises but not on Azure VMware Solution. Other products in the Horizon ecosystem are also supported. For more information, see [feature parity and interoperability](https://kb.vmware.com/s/article/80850).
## Deploy Horizon in a hybrid cloud
Given the Azure private cloud and SDDC max limit, we recommend a deployment arch
The connection from Azure Virtual Network to the Azure private clouds / SDDCs should be configured with ExpressRoute FastPath. The following diagram shows a basic Horizon pod deployment.

## Network connectivity to scale Horizon on Azure VMware Solution
This section lays out the network architecture at a high level with some common
### Single Horizon pod on Azure VMware Solution

A single Horizon pod is the most straightforward deployment scenario because you deploy just one Horizon pod in the US East region. Since each private cloud and SDDC is estimated to handle 4,000 desktop sessions, you deploy the maximum Horizon pod size. You can plan the deployment of up to three private clouds/SDDCs.
A variation on the basic example might be to support connectivity for on-premise
The diagram shows how to support connectivity for on-premises resources. To connect your corporate network to the Azure Virtual Network, you'll need an ExpressRoute circuit. You'll also need to connect your corporate network with each of the private clouds and SDDCs using ExpressRoute Global Reach. It allows the connectivity from the SDDC to the ExpressRoute circuit and on-premises resources.

### Multiple Horizon pods on Azure VMware Solution across multiple regions

Another scenario is scaling Horizon across multiple pods. In this scenario, you deploy two Horizon pods in two different regions and federate them using CPA. It's similar to the network configuration in the previous example, but with some additional cross-regional links.
-You'll connect the Azure Virtual Network in each region to the private clouds/SDDCs in the other region. It allows Horizon connection servers part of the CPA federation to connect to all desktops under management. Adding additional private clouds/SDDCs to this configuration would allow you to scale to 24,000 sessions overall. 
+You'll connect the Azure Virtual Network in each region to the private clouds/SDDCs in the other region. It allows Horizon connection servers that are part of the CPA federation to connect to all desktops under management. Adding extra private clouds/SDDCs to this configuration would allow you to scale to 24,000 sessions overall.
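As a hedged example of the Global Reach links described here, each link pairs your corporate (or cross-region) circuit with a private cloud circuit. The names and address prefix below are placeholders, and with Azure VMware Solution you typically use the circuit's authorization key rather than owning both circuits directly:

```bash
# Create an ExpressRoute Global Reach link between two circuits.
# The /29 address prefix must be an unused range reserved for the Global Reach link.
az network express-route peering connection create \
  --resource-group <rg> \
  --circuit-name <corporate-circuit> \
  --peering-name AzurePrivatePeering \
  --name corp-to-avs \
  --peer-circuit <avs-circuit-resource-id> \
  --address-prefix 192.168.8.0/29
```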
The same principles apply if you deploy two Horizon pods in the same region. Make sure to deploy the second Horizon pod in a *separate Azure Virtual Network*. Just like the single pod example, you can connect your corporate network and on-premises pod to this multi-pod/region example using ExpressRoute and Global Reach. ## Size Azure VMware Solution hosts for Horizon deployments
Work with your VMware EUC sales team to determine the Horizon licensing cost bas
### Azure Instance Types
-To understand the Azure virtual machine sizes which will be required for the Horizon Infrastructure please refer to VMware's guidelines which can be found [here](https://techzone.vmware.com/resource/horizon-on-azure-vmware-solution-configuration#horizon-installation-on-azure-vmware-solution).
+To understand the Azure virtual machine sizes that will be required for the Horizon Infrastructure, see [Horizon Installation on Azure VMware Solution](https://techzone.vmware.com/resource/horizon-on-azure-vmware-solution-configuration#horizon-installation-on-azure-vmware-solution).
## References [System Requirements For Horizon Agent for Linux](https://docs.vmware.com/en/VMware-Horizon/2012/linux-desktops-setup/GUID-E268BDBF-1D89-492B-8563-88936FD6607A.html)
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
No further action is required.
Azure VMware Solution service will do maintenance work through May 23, 2021, to apply important updates to the vCenter server in your private cloud. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance for your private cloud.
-During this time, VMware vCenter will be unavailable and you won't be able to manage VMs (stop, start, create, or delete). It's recommended that, during this time, you don't plan any other activities like scaling up private cloud, creating new networks, and so on in your private cloud.
+During this time, VMware vCenter will be unavailable, and you won't be able to manage VMs (stop, start, create, or delete). We recommend that you don't plan other activities, like scaling up your private cloud or creating new networks, during this time.
There is no impact to workloads running in your private cloud.
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-hub-and-spoke.md
Traffic between the on-premises datacenter, Azure VMware Solution private cloud,
The diagram shows an example of a Hub and Spoke deployment in Azure connected to on-premises and Azure VMware Solution through ExpressRoute Global Reach. The architecture has the following main components:
Because an ExpressRoute gateway doesn't provide transitive routing between its c
* **On-premises to Azure VMware Solution traffic flow**
- :::image type="content" source="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png" alt-text="On-premises to Azure VMware Solution traffic flow" border="false" lightbox="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png":::
+ :::image type="content" source="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png" alt-text="Diagram showing the on-premises to Azure VMware Solution traffic flow." border="false" lightbox="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png":::
* **Azure VMware Solution to Hub VNET traffic flow**
- :::image type="content" source="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png" alt-text="Azure VMware Solution to Hub virtual network traffic flow" border="false" lightbox="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png":::
+ :::image type="content" source="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png" alt-text="Diagram showing the Azure VMware Solution to Hub virtual network traffic flow." border="false" lightbox="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png":::
For more information on Azure VMware Solution networking and connectivity concepts, see the [Azure VMware Solution product documentation](./concepts-networking.md).
For more information on Azure VMware Solution networking and connectivity concep
Create route tables to direct the traffic to Azure Firewall. For the Spoke virtual networks, create a route that sets the default route to the internal interface of Azure Firewall. This way, when a workload in the Virtual Network needs to reach the Azure VMware Solution address space, the firewall can evaluate it and apply the corresponding traffic rule to either allow or deny it. > [!IMPORTANT]
Create route tables to direct the traffic to Azure Firewall. For the Spoke virt
Set routes for specific networks on the corresponding route table. For example, routes to reach Azure VMware Solution management and workloads IP prefixes from the spoke workloads and the other way around. Use network security groups within the Spokes and the Hub as a second level of traffic segmentation to create a more granular traffic policy.
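For illustration, a minimal sketch of the route table setup described above; the resource group, virtual network, and subnet names are placeholders, and you supply the firewall's private IP:

```bash
# Route table that sends the Spoke's default route to Azure Firewall.
az network route-table create --resource-group <rg> --name spoke-rt

az network route-table route create \
  --resource-group <rg> \
  --route-table-name spoke-rt \
  --name default-to-firewall \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address <firewall-private-ip>

# Associate the route table with the Spoke workload subnet.
az network vnet subnet update \
  --resource-group <rg> \
  --vnet-name spoke-vnet \
  --name workload-subnet \
  --route-table spoke-rt
```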
Azure Application Gateway V1 and V2 have been tested with web apps that run on A
For more information, see the Azure VMware Solution-specific article on [Application Gateway](./protect-azure-vmware-solution-with-application-gateway.md). ### Jump box and Azure Bastion
As a security best practice, deploy [Microsoft Azure Bastion](../bastion/index.y
> Do not give a public IP address to the jump box VM or expose 3389/TCP port to the public internet. ## Azure DNS resolution considerations
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
You can view the privileges granted to the Azure VMware Solution CloudAdmin role
1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
- :::image type="content" source="media/concepts/role-based-access-control-cloudadmin-privileges.png" alt-text="How to view the CloudAdmin role privileges in vSphere Client":::
+ :::image type="content" source="media/concepts/role-based-access-control-cloudadmin-privileges.png" alt-text="Screenshot showing the roles and privileges for CloudAdmin in the vSphere Client.":::
The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. For more information, see the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
azure-vmware Concepts Monitor Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-monitor-protection.md
Last updated 06/14/2021
Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) on Azure VMware Solution and on-premises VMs. The Azure native services that you can integrate with Azure VMware Solution include: - **Log Analytics workspace** is a unique environment to store log data. Each workspace has its own data repository and configuration. Data sources and solutions are configured to store their data in a specific workspace. Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs. -- **Azure Security Center** is a unified infrastructure security management system. It strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises. It assesses the vulnerability of Azure VMware Solution VMs and raise alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Azure Security Center. For more information, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
+- **Azure Security Center** is a unified infrastructure security management system. It strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises. It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. You can define security policies in Azure Security Center. For more information, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
- **[Azure Monitor](../azure-monitor/vm/vminsights-enable-overview.md)** is a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment. With Azure Monitor, you can monitor guest operating system performance and discover and map application dependencies for Azure VMware Solution or on-premises VMs. Your Log Analytics workspace in Azure Monitor enables log collection and performance counter collection using the Log Analytics agent or extensions. Collect data and logs to a single point and present that data to different Azure native services. - **Azure Arc** extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. [Azure Arc enabled servers](../azure-arc/servers/overview.md) enables you to manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or other cloud provider. You can attach a Kubernetes cluster hosted in your Azure VMware Solution environment using [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md). - **[Azure Update Management](../automation/update-management/overview.md)** in Azure Automation manages operating system updates for your Windows and Linux machines in a hybrid environment. It monitors patching compliance and forwards patching deviation alerts to Azure Monitor for remediation. Azure Update Management must connect to your Log Analytics workspace to use stored data to assess the status of updates on your VMs.
The diagram shows the integrated monitoring architecture for Azure VMware Soluti
The Log Analytics agent enables collection of log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a Log Analytics workspace. You can deploy the Log Analytics agent using Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) for new and existing VMs.
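As a sketch, assuming the `connectedmachine` Azure CLI extension and an Arc-enabled Linux machine, the Log Analytics agent could be deployed as a VM extension like this; the machine name, workspace ID, and workspace key are placeholders:

```bash
# Deploy the Log Analytics (OMS) agent to an Arc-enabled Linux server.
az connectedmachine extension create \
  --machine-name <arc-machine> \
  --resource-group <rg> \
  --location <region> \
  --name OmsAgentForLinux \
  --type OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId":"<workspace-id>"}' \
  --protected-settings '{"workspaceKey":"<workspace-key>"}'
```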
-Once the logs are collected by the Log Analytics workspace, you can configure the Log Analytics workspace with Azure Security Center. Azure Security Center assesses the vulnerability status of Azure VMware Solution VMs and raise an alert for any critical vulnerability. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md).
+Once the logs are collected by the Log Analytics workspace, you can configure the Log Analytics workspace with Azure Security Center. Azure Security Center assesses the vulnerability status of Azure VMware Solution VMs and raises an alert for any critical vulnerability. For instance, it assesses missing operating system patches, security misconfigurations, and [endpoint protection](../security-center/security-center-services.md).
You can configure the Log Analytics workspace with Azure Sentinel for alert detection, threat visibility, hunting, and threat response. In the preceding diagram, Azure Security Center is connected to Azure Sentinel using the Azure Security Center connector. Azure Security Center forwards the environment vulnerability to Azure Sentinel to create an incident and map it with other threats. You can also create scheduled query rules to detect unwanted activity and convert it to incidents.
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
As with other resources, private clouds are installed and managed from within an
The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters. ## Hosts
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
That default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provi
>[!TIP] >If you're unsure whether the cluster will grow to four or more hosts, then deploy using the default policy. If you're sure your cluster will grow, then instead of expanding the cluster after your initial deployment, we recommend deploying the extra hosts during deployment. As the VMs are deployed to the cluster, change the disk's storage policy in the VM settings to either RAID-5 FTT-1 or RAID-6 FTT-2. >
->:::image type="content" source="media/concepts/vsphere-vm-storage-policies-2.png" alt-text="Screenshot ":::
+>:::image type="content" source="media/concepts/vsphere-vm-storage-policies-2.png" alt-text="Screenshot showing the RAID-5 FTT-1 and RAID-6 FTT-2 options highlighted.":::
## Data-at-rest encryption
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
If you want to use a third-party external DHCP server, you'll create a DHCP rela
### Create DHCP relay service
-Use a DHCP relay for any non-NSX based DHCP service. For example, a VM running DHCP in Azure VMware Solution, Azure IaaS, or on-premises.
+Use a DHCP relay for any non-NSX-based DHCP service. For example, a VM running DHCP in Azure VMware Solution, Azure IaaS, or on-premises.
1. In NSX-T Manager, select **Networking** > **DHCP**, and then select **Add Server**.
azure-vmware Configure Github Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-github-enterprise-server.md
Title: Configure GitHub Enterprise Server on Azure VMware Solution description: Learn how to Set up GitHub Enterprise Server on your Azure VMware Solution private cloud. Previously updated : 02/11/2021 Last updated : 07/07/2021 # Configure GitHub Enterprise Server on Azure VMware Solution
-In this article, we walk through the steps to set up GitHub Enterprise Server, the "on-premises" version of [GitHub.com](https://github.com/), on your Azure VMware Solution private cloud. The scenario we'll cover is a GitHub Enterprise Server instance that can serve up to 3,000 developers running up to 25 jobs per minute on GitHub Actions. It includes the setup of (at time of writing) *preview* features, such as GitHub Actions. To customize the setup for your particular needs, review the requirements listed in [Installing GitHub Enterprise Server on VMware](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#hardware-considerations).
+In this article, you'll set up GitHub Enterprise Server, the "on-premises" version of [GitHub.com](https://github.com/), on your Azure VMware Solution private cloud. The scenario covers a GitHub Enterprise Server instance that can serve up to 3,000 developers running up to 25 jobs per minute on GitHub Actions. It includes the setup of (at time of writing) *preview* features, such as GitHub Actions. To customize the setup for your particular needs, review the requirements listed in [Installing GitHub Enterprise Server on VMware](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#hardware-considerations).
## Before you begin
GitHub Enterprise Server requires a valid license key. You may sign up for a [tr
## Install GitHub Enterprise Server on VMware
-Download [the current release of GitHub Enterprise Server](https://enterprise.github.com/releases/2.19.0/download) for VMware ESXi/vSphere (OVA) and [deploy the OVA template](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-17BEDA21-43F6-41F4-8FB2-E01D275FE9B4.html) you downloaded.
+1. Download [the current release of GitHub Enterprise Server](https://enterprise.github.com/releases/2.19.0/download) for VMware ESXi/vSphere (OVA) and [deploy the OVA template](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-17BEDA21-43F6-41F4-8FB2-E01D275FE9B4.html) you downloaded.
+ :::image type="content" source="media/github-enterprise-server/github-options.png" alt-text="Screenshot showing the GitHub Enterprise Server on VMware installation options.":::
+ :::image type="content" source="media/github-enterprise-server/deploy-ova-template.png" alt-text="Screenshot showing the Deploy the OVA Template menu option.":::
-Provide a recognizable name for your new virtual machine, such as GitHubEnterpriseServer. You don't need to include the release details in the VM name, as these details become stale when the instance is upgraded. Select all the defaults for now (we'll edit these details shortly) and wait for the OVA to be imported.
+1. Provide a recognizable name for your new virtual machine, such as GitHubEnterpriseServer. You don't need to include the release details in the VM name, as these details become stale when the instance is upgraded.
-Once imported, [adjust the hardware configuration](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#creating-the-github-enterprise-server-instance) based on your needs. In our example scenario, we'll need the following configuration.
+1. Select all the defaults for now (we'll edit these details shortly) and wait for the OVA to be imported.
-| Resource | Standard Setup | Standard Set up + "Beta Features" (Actions) |
-| | | |
-| vCPUs | 4 | 8 |
-| Memory | 32 GB | 61 GB |
-| Attached storage | 250 GB | 300 GB |
-| Root storage | 200 GB | 200 GB |
+1. Once imported, [adjust the hardware configuration](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#creating-the-github-enterprise-server-instance) based on your needs. In our example scenario, we'll need the following configuration.
-However, your needs may vary. Refer to the guidance on hardware considerations in [Installing GitHub Enterprise Server on VMware](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#hardware-considerations). Also see [Adding CPU or memory resources for VMware](https://docs.github.com/en/enterprise/admin/enterprise-management/increasing-cpu-or-memory-resources#adding-cpu-or-memory-resources-for-vmware) to customize the hardware configuration based on your situation.
+ | Resource | Standard Setup | Standard Setup + "Beta Features" (Actions) |
+ | --- | --- | --- |
+ | vCPUs | 4 | 8 |
+ | Memory | 32 GB | 61 GB |
+ | Attached storage | 250 GB | 300 GB |
+ | Root storage | 200 GB | 200 GB |
+
+ Your needs may vary. Refer to the guidance on hardware considerations in [Installing GitHub Enterprise Server on VMware](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#hardware-considerations). Also see [Adding CPU or memory resources for VMware](https://docs.github.com/en/enterprise/admin/enterprise-management/increasing-cpu-or-memory-resources#adding-cpu-or-memory-resources-for-vmware) to customize the hardware configuration based on your situation.
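If you script against vCenter, one way to apply the sizing above is VMware's `govc` CLI; a sketch, assuming `GOVC_URL` and credentials are already exported and the VM name from the earlier step:

```bash
# Power the VM off before changing hardware, then raise it to the
# Actions sizing (8 vCPUs, 61 GB = 62464 MB) and power it back on.
govc vm.power -off GitHubEnterpriseServer
govc vm.change -vm GitHubEnterpriseServer -c 8 -m 62464
govc vm.power -on GitHubEnterpriseServer
```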
## Configure the GitHub Enterprise Server instance
-After the newly provisioned virtual machine (VM) has powered on, [configure it via your browser](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#configuring-the-github-enterprise-server-instance). You'll be required to upload your license file and set a management console password. Be sure to write down this password somewhere safe.
+After the newly provisioned virtual machine (VM) has powered on, [configure it through your browser](https://docs.github.com/en/enterprise/admin/installation/installing-github-enterprise-server-on-vmware#configuring-the-github-enterprise-server-instance). You'll be required to upload your license file and set a management console password. Be sure to write down this password somewhere safe.
We recommend that you take at least the following steps: 1. Upload a public SSH key to the management console, so that you can [access the administrative shell via SSH](https://docs.github.com/en/enterprise/admin/configuration/accessing-the-administrative-shell-ssh).
-2. [Configure TLS on your instance](https://docs.github.com/en/enterprise/admin/configuration/configuring-tls) so that you can use a certificate signed by a trusted certificate authority.
+2. [Configure TLS on your instance](https://docs.github.com/en/enterprise/admin/configuration/configuring-tls) so that you can use a certificate signed by a trusted certificate authority. Apply your settings.
+
+ :::image type="content" source="media/github-enterprise-server/configuring-your-instance.png" alt-text="Screenshot showing the settings being applied to your instance.":::
+1. While the instance restarts, configure blob storage for GitHub Actions.
-Apply your settings. While the instance restarts, you can continue with the next step, **Configuring Blob Storage for GitHub Actions**.
+ >[!NOTE]
+ >GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
+
+ External blob storage is necessary to enable GitHub Actions on GitHub Enterprise Server (currently available as a "beta" feature). This external blob storage is used by Actions to store artifacts and logs. Actions on GitHub Enterprise Server [supports Azure Blob Storage as a storage provider](https://docs.github.com/en/enterprise/admin/github-actions/enabling-github-actions-and-configuring-storage#about-external-storage-requirements) (and some others). So we'll provision a new Azure storage account with a [storage account type](../storage/common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#types-of-storage-accounts) of BlobStorage. A CLI sketch of provisioning this account follows these steps.
+
+ :::image type="content" source="media/github-enterprise-server/storage-account.png" alt-text="Screenshot showing the instance details to enter for provisioning an Azure Blob Storage account.":::
+
+1. Once the deployment of the new BlobStorage resource has completed, copy and make a note of the connection string (available under Access keys). You'll need this string shortly.
+1. After the instance restarts, create a new admin account on the instance. Be sure to make a note of this user's password as well.
-After the instance restarts, you can create a new admin account on the instance. Be sure to make a note of this user's password as well.
+ :::image type="content" source="media/github-enterprise-server/create-admin-account.png" alt-text="Screenshot showing the Create admin account for GitHub Enterprise.":::
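As the sketch referenced in the blob storage step above, the account and its connection string could be created with the Azure CLI; the account name, resource group, and region are placeholders:

```bash
# Create a BlobStorage-kind account for GitHub Actions artifacts and logs.
az storage account create \
  --name <storageaccount> \
  --resource-group <rg> \
  --location <region> \
  --kind BlobStorage \
  --sku Standard_LRS \
  --access-tier Hot

# Retrieve the connection string needed when enabling Actions.
az storage account show-connection-string \
  --name <storageaccount> \
  --resource-group <rg> \
  --output tsv
```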
### Other configuration steps
To harden your instance for production use, the following optional setup steps a
2. [Configure](https://docs.github.com/en/enterprise/admin/configuration/configuring-backups-on-your-appliance) [backup-utilities](https://github.com/github/backup-utils), providing versioned snapshots for disaster recovery, hosted separately from the primary instance. 3. [Set up subdomain isolation](https://docs.github.com/en/enterprise/admin/configuration/enabling-subdomain-isolation), using a valid TLS certificate, to mitigate cross-site scripting and other related vulnerabilities.
-## Configure blob storage for GitHub Actions
-
-> [!NOTE]
-> GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
-
-External blob storage is necessary to enable GitHub Actions on GitHub Enterprise Server (currently available as a "beta" feature). This external blob storage is used by Actions to store artifacts and logs. Actions on GitHub Enterprise Server [supports Azure Blob Storage as a storage provider](https://docs.github.com/en/enterprise/admin/github-actions/enabling-github-actions-and-configuring-storage#about-external-storage-requirements) (and some others). So we'll provision a new Azure storage account with a [storage account type](../storage/common/storage-account-overview.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#types-of-storage-accounts) of BlobStorage:
--
-Once the deployment of the new BlobStorage resource has completed, copy and make a note of the connection string (available under Access keys). We'll need this string shortly.
## Set up the GitHub Actions runner > [!NOTE] > GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
-At this point, you should have an instance of GitHub Enterprise Server running, with an administrator account created. You should also have external blob storage that GitHub Actions will use for persistence.
-
-Now let's create somewhere for GitHub Actions to run; again, we'll use Azure VMware Solution.
-
-First, let's provision a new VM on the cluster. We'll base our VM on [a recent release of Ubuntu Server](http://releases.ubuntu.com/20.04.1/).
-
+At this point, you should have an instance of GitHub Enterprise Server running, with an administrator account created. You should also have external blob storage that GitHub Actions uses for persistence.
+Create somewhere for GitHub Actions to run; again, we'll use Azure VMware Solution.
-Once the VM is created, power it up and connect to it via SSH.
+1. Provision a new VM on the cluster and base it on [a recent release of Ubuntu Server](http://releases.ubuntu.com/20.04.1/).
-Next, install [the Actions runner](https://github.com/actions/runner) application, which runs a job from a GitHub Actions workflow. Identify and download the most current Linux x64 release of the Actions runner, either from [the releases page](https://github.com/actions/runner/releases) or by running the following quick script. This script requires both curl and [jq](https://stedolan.github.io/jq/) to be present on your VM.
+ :::image type="content" source="media/github-enterprise-server/provision-new-vm.png" alt-text="Screenshot showing the virtual machine name and location to provision a new VM.":::
-`LATEST\_RELEASE\_ASSET\_URL=$( curl https://api.github.com/repos/actions/runner/releases/latest | \`
+1. Continue through the setup, selecting the compute resource, storage, and compatibility.
-` jq -r '.assets | .[] | select(.name | match("actions-runner-linux-arm64")) | .url' )`
+1. Select the guest OS that will be installed on the VM.
-`DOWNLOAD\_URL=$( curl $LATEST\_RELEASE\_ASSET\_URL | \`
+ :::image type="content" source="media/github-enterprise-server/provision-new-vm-2.png" alt-text="Screenshot showing the Guest OS Family and Guest OS version to install on the VM.":::
-` jq -r '.browser\_download\_url' )`
+1. Once the VM is created, power it up and connect to it via SSH.
-`curl -OL $DOWNLOAD\_URL`
+1. Install [the Actions runner](https://github.com/actions/runner) application, which runs a job from a GitHub Actions workflow. Identify and download the most current Linux x64 release of the Actions runner, either from [the releases page](https://github.com/actions/runner/releases) or by running the following quick script. This script requires both curl and [jq](https://stedolan.github.io/jq/) to be present on your VM.
-You should now have a file locally on your VM, actions-runner-linux-arm64-\*.tar.gz. Extract this tarball locally:
-
-`tar xzf actions-runner-linux-arm64-\*.tar.gz`
-
-This extraction unpacks a few files locally, including a `config.sh` and `run.sh` script, which we'll come back to shortly.
+    ```bash
+    # Identify and download the latest Linux x64 release of the Actions runner.
+    # Requires curl and jq on the VM.
+    LATEST_RELEASE_ASSET_URL=$( curl https://api.github.com/repos/actions/runner/releases/latest | \
+      jq -r '.assets | .[] | select(.name | match("actions-runner-linux-x64")) | .url' )
+    DOWNLOAD_URL=$( curl $LATEST_RELEASE_ASSET_URL | \
+      jq -r '.browser_download_url' )
+    curl -OL $DOWNLOAD_URL
+    ```
+
+    You should now have a file locally on your VM, actions-runner-linux-x64-\*.tar.gz. Extract this tarball locally:
+
+    ```bash
+    tar xzf actions-runner-linux-x64-*.tar.gz
+    ```
+
+    This extraction unpacks a few files locally, including a `config.sh` and `run.sh` script.
## Enable GitHub Actions
-> [!NOTE]
-> GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
-
-Nearly there! Let's configure and enable GitHub Actions on the GitHub Enterprise Server instance. We'll need to [access the GitHub Enterprise Server instance's administrative shell over SSH](https://docs.github.com/en/enterprise/admin/configuration/accessing-the-administrative-shell-ssh), and then run the following commands:
-
-`# set an environment variable containing your Blob storage connection string`
-
-`export CONNECTION\_STRING="<your connection string from the blob storage step>"`
-
-`# configure actions storage`
-
-`ghe-config secrets.actions.storage.blob-provider azure`
-
-`ghe-config secrets.actions.storage.azure.connection-string "$CONNECTION\_STRING"`
-
-`# apply these settings`
-
-`ghe-config-apply`
-
-`# execute a precheck, this install additional software required by Actions on GitHub Enterprise Server`
-
-`ghe-actions-precheck -p azure -cs "$CONNECTION\_STRING"`
-
-`# enable actions, and re-apply the config`
+>[!NOTE]
+>GitHub Actions is [currently available as a limited beta on GitHub Enterprise Server release 2.22](https://docs.github.com/en/enterprise/admin/github-actions).
-`ghe-config app.actions.enabled true`
+Configure and enable GitHub Actions on the GitHub Enterprise Server instance.
-`ghe-config-apply`
+1. [Access the GitHub Enterprise Server instance's administrative shell over SSH](https://docs.github.com/en/enterprise/admin/configuration/accessing-the-administrative-shell-ssh), and then run the following commands:
-Next run:
+    ```bash
+    # Set an environment variable containing your Blob storage connection string.
+    export CONNECTION_STRING="<your connection string from the blob storage step>"
+
+    # Configure actions storage.
+    ghe-config secrets.actions.storage.blob-provider azure
+    ghe-config secrets.actions.storage.azure.connection-string "$CONNECTION_STRING"
+
+    # Apply these settings.
+    ghe-config-apply
+
+    # Execute a precheck; this installs additional software required by Actions on GitHub Enterprise Server.
+    ghe-actions-precheck -p azure -cs "$CONNECTION_STRING"
+
+    # Enable actions, and re-apply the config.
+    ghe-config app.actions.enabled true
+    ghe-config-apply
+    ```
-`ghe-actions-check -s blob`
+1. Next, check the health of your blob storage:
-You should see output: "Blob Storage is healthy".
+ `ghe-actions-check -s blob`
-Now that GitHub Actions is configured, enable it for your users. Sign in to your GitHub Enterprise Server instance as an administrator, and select the ![Rocket icon.](media/github-enterprise-server/rocket-icon.png) in the upper right corner of any page. In the left sidebar, select **Enterprise overview**, then **Policies**, **Actions**, and select the option to **enable Actions for all organizations**.
+ You should see output: _Blob Storage is healthy_.
-Next, configure your runner from the **Self-hosted runners** tab. Select **Add new** and then **New runner** from the drop-down.
+1. Now that **GitHub Actions** is configured, enable it for your users. Sign in to your GitHub Enterprise Server instance as an administrator, and select the ![Rocket icon.](media/github-enterprise-server/rocket-icon.png) in the upper right corner of any page.
-On the next page, you'll be presented with a set of commands to run, we just need to copy the command to **configure** the runner, for instance:
+1. In the left sidebar, select **Enterprise overview**, then **Policies**, **Actions**, and select the option to **enable Actions for all organizations**.
-`./config.sh --url https://10.1.1.26/enterprises/octo-org --token AAAAAA5RHF34QLYBDCHWLJC7L73MA`
+1. Configure your runner from the **Self-hosted runners** tab. Select **Add new** and then **New runner** from the drop-down. You'll be presented with a set of commands to run.
-Copy the `config.sh` command and paste it into a session on your Actions runner (created previously).
+1. Copy the command to **configure** the runner, for instance:
+ `./config.sh --url https://10.1.1.26/enterprises/octo-org --token AAAAAA5RHF34QLYBDCHWLJC7L73MA`
-Use the run.sh command to *run* the runner:
+1. Paste the `config.sh` command into a session on your Actions runner (created previously).
+ :::image type="content" source="media/github-enterprise-server/actions-runner.png" alt-text="Screenshot showing the GitHub Actions runner registration and settings.":::
-To make this runner available to organizations in your enterprise, edit its organization access:
+1. Use the `./run.sh` command to *run* the runner. (To run it as a service instead, see the sketch after the tip below.)
+ >[!TIP]
+ >To make this runner available to organizations in your enterprise, edit its organization access. You can limit access to a subset of organizations, and even to specific repositories.
+ >
+ >:::image type="content" source="media/github-enterprise-server/edit-runner-access.png" alt-text="Screenshot of how to edit access for the self-hosted runners.":::
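If you'd rather not keep `./run.sh` attached to a terminal, the runner package also ships a `svc.sh` helper that can register the configured runner as a service; a sketch, assuming you already ran `config.sh` and are in the runner directory:

```bash
# Install, start, and check the runner as a service.
sudo ./svc.sh install
sudo ./svc.sh start
sudo ./svc.sh status
```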
-Here we'll make it available to all organizations, but you can limit access to a subset of organizations, and even to specific repositories.
## (Optional) Configure GitHub Connect
To enable GitHub Connect, follow the steps in [Enabling automatic access to GitH
Once GitHub Connect is enabled, select the **Server to use actions from GitHub.com in workflow runs** option. ## Set up and run your first workflow
Now that Actions and GitHub Connect is set up, let's put all this work to good u
In this basic workflow, we'll use `octokit/request-action` to just open an issue on GitHub using the API. >[!NOTE] >GitHub.com hosts the action, but when it runs on GitHub Enterprise Server, it *automatically* uses the GitHub Enterprise Server API. If you choose not to enable GitHub Connect, you can use the following alternative workflow.
-Navigate to a repo on your instance, and add the above workflow as: `.github/workflows/hello-world.yml`
+1. Navigate to a repo on your instance, and add the above workflow as: `.github/workflows/hello-world.yml`
+ :::image type="content" source="media/github-enterprise-server/workflow-example-3.png" alt-text="Screenshot of another alternative example workflow.":::
-In the **Actions** tab for your repo, wait for the workflow to execute.
+1. In the **Actions** tab for your repo, wait for the workflow to execute.
+ :::image type="content" source="media/github-enterprise-server/executed-example-workflow.png" alt-text="Screenshot of an executed example workflow.":::
-You can also watch it being processed by the runner.
+ You can see it being processed by the runner.
+ :::image type="content" source="media/github-enterprise-server/workflow-processed-by-runner.png" alt-text="Screenshot of the workflow processed by runner.":::
If everything ran successfully, you should see a new issue in your repo, entitled "Hello world." Congratulations! You just completed your first Actions workflow on GitHub Enterprise Server, running on your Azure VMware Solution private cloud.
azure-vmware Configure L2 Stretched Vmware Hcx Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-l2-stretched-vmware-hcx-networks.md
DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch netwo
1. Select **Add Segment Profile** and then **Segment Security**. :::image type="content" source="media/manage-dhcp/add-segment-profile.png" alt-text="Screenshot of how to add a segment profile in NSX-T" lightbox="media/manage-dhcp/add-segment-profile.png"::: 1. Provide a name and a tag, and then set the **BPDU Filter** toggle to ON and all the DHCP toggles to OFF. :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off" lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::
azure-vmware Configure Nsx Network Components Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-nsx-network-components-azure-portal.md
You can create and configure an NSX-T segment from the Azure VMware Solution con
- **Connected gateway** - *Selected by default and is read-only.* Tier-1 gateway and type of segment information.
- - **T1** - Name of the Tier-1 gateway in NSX-T Manager. An Azure VMware Solution private cloud comes with an NSX-T Tier-0 gateway in Active/Active mode and a default NSX-T Tier-1 gateway in Active/Standby mode. Segments created through the Azure VMware Solution console only connect to the default Tier-1 gateway, and the workloads of these segments get East-West and North-South connectivity. You can only create more Tier-1 gateways through NSX-T Manager. Tier-1 gateways created from the NSX-T Manager console are not visible in the Azure VMware Solution console.
+ - **T1** - Name of the Tier-1 gateway in NSX-T Manager. A private cloud comes with an NSX-T Tier-0 gateway in Active/Active mode and a default NSX-T Tier-1 gateway in Active/Standby mode. Segments created through the Azure VMware Solution console only connect to the default Tier-1 gateway, and the workloads of these segments get East-West and North-South connectivity. You can only create more Tier-1 gateways through NSX-T Manager. Tier-1 gateways created from the NSX-T Manager console are not visible in the Azure VMware Solution console.
- **Type** - Overlay segment supported by Azure VMware Solution.
To set up port mirroring in the Azure VMware Solution console, you'll:
:::image type="content" source="media/configure-nsx-network-components-azure-portal/add-port-mirroring-vm-groups.png" alt-text="Screenshot showing how to create a VM group for port mirroring.":::
-1. Provide a name for the new VM group, select the desired VMs from the list, and then **OK**.
+1. Provide a name for the new VM group, select VMs from the list, and then **OK**.
1. Repeat these steps to create the destination VM group.
When a DNS query is received, a DNS forwarder compares the domain name with the
1. Select **FQDN zone** and provide a name, the FQDN zone, and up to three DNS server IP addresses in the format of **8.8.8.8**. (A CLI sketch follows these steps.)
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-fqdn-zone.png" alt-text="Screenshot showing showing the required information needed to add an FQDN zone.":::
+ :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-fqdn-zone.png" alt-text="Screenshot showing the required information needed to add an FQDN zone.":::
1. Select **OK** to finish adding the default DNS zone and DNS service.
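The same FQDN zone can likely be scripted; a heavily hedged sketch, assuming the `vmware` Azure CLI extension and its workload network commands are available, with all names, the domain, and the server IP as placeholders:

```bash
# Create an FQDN zone that forwards queries for a domain to specific DNS servers.
az vmware workload-network dns-zone create \
  --resource-group <rg> \
  --private-cloud <private-cloud> \
  --dns-zone <zone-name> \
  --display-name <zone-name> \
  --domain contoso.local \
  --dns-server-ips 10.0.0.53
```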
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
3. On the **Basics** tab, enter the required fields.
- :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/site-basics.png" alt-text="Screenshot shows Create VPN site page with the Basics tab open." lightbox="../../includes/media/virtual-wan-tutorial-site-include/site-basics.png":::
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/site-basics.png" alt-text="Screenshot showing the Create VPN site page with the Basics tab open." lightbox="../../includes/media/virtual-wan-tutorial-site-include/site-basics.png":::
* **Region** - Previously referred to as location. It's the location you want to create this site resource in. * **Name** - The name by which you want to refer to your on-premises site.
- * **Device vendor** - The name of the VPN device vendor (for example: Citrix, Cisco, Barracuda). Adding the device vendor can help the Azure Team better understand your environment in order to add more optimization possibilities in the future, or to help you troubleshoot.
+ * **Device vendor** - The name of the VPN device vendor (for example: Citrix, Cisco, Barracuda). It helps the Azure Team better understand your environment in order to add more optimization possibilities in the future, or to help you troubleshoot.
 * **Private address space** - The CIDR IP address space that is located on your on-premises site. Traffic destined for this address space is routed to your local site. The CIDR block is only required if [BGP](../vpn-gateway/bgp-howto.md) isn't enabled for the site. A CLI sketch of creating a VPN site follows this list.
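As the sketch referenced above, a VPN site with these basics could be created with the Azure CLI; all names, the public IP, and the prefix are placeholders, and BGP settings such as `--asn` apply only if you enable BGP for the site:

```bash
# Create a VPN site that represents the on-premises device.
az network vpn-site create \
  --resource-group <rg> \
  --name on-premises-site \
  --location <region> \
  --virtual-wan <wan-name> \
  --ip-address <on-premises-vpn-public-ip> \
  --address-prefixes 192.168.0.0/24 \
  --device-vendor Cisco
```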
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
>If you edit the address space after creating the site (for example, adding an address space), it can take 8-10 minutes to update the effective routes while the components are recreated.
-1. Select **Links** to add information about the physical links at the branch. If you have a Virtual WAN partner CPE device, check with them to see if this information is exchanged with Azure as a part of the branch information upload set up from their systems.
+1. Select **Links** to add information about the physical links at the branch. If you have a Virtual WAN partner CPE device, check with them to see if this information gets exchanged with Azure as a part of the branch information upload set up from their systems.
Specifying link and provider names allows you to distinguish between any number of gateways that may eventually be created as part of the hub. [BGP](../vpn-gateway/vpn-gateway-bgp-overview.md) and autonomous system number (ASN) must be unique inside your organization. BGP ensures that both Azure VMware Solution and the on-premises servers advertise their routes across the tunnel. If disabled, the subnets that need to be advertised must be manually maintained. If subnets are missed, HCX fails to form the service mesh.
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
:::image type="content" source="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png" alt-text="Screenshot that shows a site-to-site connection and connectivity status." lightbox="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png":::
- **Connection Status:** This is the status of the Azure resource for the connection that connects the VPN site to the Azure hubΓÇÖs VPN gateway. Once this control plane operation is successful, Azure VPN gateway and the on-premises VPN device will proceed to establish connectivity.
+ **Connection Status:** Status of the Azure resource for the connection that connects the VPN site to the Azure hub's VPN gateway. Once this control plane operation is successful, Azure VPN gateway and the on-premises VPN device will proceed to establish connectivity.
- **Connectivity Status:** This is the actual connectivity (data path) status between AzureΓÇÖs VPN gateway in the hub and VPN site. It can show any of the following states:
+ **Connectivity Status:** Actual connectivity (data path) status between Azure's VPN gateway in the hub and VPN site. It can show any of the following states:
- * **Unknown**: This state is typically seen if the backend systems are working to transition to another status.
+ * **Unknown**: State is typically seen if the backend systems are working to transition to another status.
* **Connecting**: Azure VPN gateway is trying to reach out to the actual on-premises VPN site. * **Connected**: Connectivity is established between Azure VPN gateway and on-premises VPN site.
- * **Disconnected**: This status is seen if, for any reason (on-premises or in Azure), the connection was disconnected.
+ * **Disconnected**: Status is seen if, for any reason (on-premises or in Azure), the connection was disconnected.
1. Download the VPN configuration file and apply it to the on-premises endpoint.
azure-vmware Configure Windows Server Failover Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-windows-server-failover-cluster.md
Azure VMware Solution provides native support for virtualized WSFC. It supports
The following diagram illustrates the architecture of WSFC virtual nodes on an Azure VMware Solution private cloud. It shows where Azure VMware Solution resides, including the WSFC virtual servers (red box), in relation to the broader Azure platform. This diagram illustrates a typical hub-spoke architecture, but a similar setup is possible with the use of Azure Virtual WAN. Both offer all the value other Azure services can bring you. ## Supported configurations
Currently, the configurations supported are:
| Virtual NIC | VMXNET3 paravirtualized network interface card (NIC); enable the in-guest Windows Receive Side Scaling (RSS) on the virtual NIC. | | Memory | Use full VM reservation memory for nodes in the WSFC cluster. | | Increase the I/O timeout of each WSFC node. | Modify HKEY\_LOCAL\_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue. Set it to 60 seconds or more. (If you recreate the cluster, this value might be reset to its default, so you must change it again.) |
-| Windows cluster health monitoring | The value of the SameSubnetThreshold Parameter of Windows cluster health monitoring must be modified to allow 10 missed heartbeats at minimum. This is [the default in Windows Server 2016](https://techcommunity.microsoft.com/t5/failover-clustering/tuning-failover-cluster-network-thresholds/ba-p/371834). This recommendation applies to all applications using WSFC, including shared and non-shared disks. |
+| Windows cluster health monitoring | The value of the SameSubnetThreshold Parameter of Windows cluster health monitoring must be modified to allow 10 missed heartbeats at minimum. It's [the default in Windows Server 2016](https://techcommunity.microsoft.com/t5/failover-clustering/tuning-failover-cluster-network-thresholds/ba-p/371834). This recommendation applies to all applications using WSFC, including shared and non-shared disks. |
### WSFC node - Boot disks configuration parameters
Currently, the configurations supported are:
| Multi-writer flag | Not used | | Disk format | Thick provisioned. (Eager Zeroed Thick (EZT) isn't required with vSAN.) | ## Non-supported scenarios
The following activities aren't supported and might cause WSFC node failover:
- **Validate Network Communication**. The Cluster Validation test will throw a warning that only one network interface per cluster node is available. You may ignore this warning. Azure VMware Solution provides the required availability and performance needed, since the nodes are connected to one of the NSX-T segments. However, keep this item as part of the Cluster Validation test, as it will validate other aspects of network communication.
-16. Create a DRS rule to place the WSFC VMs on the same Azure VMware Solution nodes. To do so, you need a host-to-VM affinity rule. This way, cluster nodes will run on the same Azure VMware Solution host. Again, this is for pilot purposes until placement policies are available.
+16. Create a DRS rule to place the WSFC VMs on the same Azure VMware Solution nodes. To do so, you need a host-to-VM affinity rule. This way, cluster nodes will run on the same Azure VMware Solution host. Again, it's for pilot purposes until placement policies are available.
>[!NOTE] > For this you need to create a Support Request ticket. Our Azure support organization will be able to help you with this.
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
You should have connectivity between the Azure Virtual Network where the Express
1. Use a [virtual machine](../virtual-machines/windows/quick-create-portal.md#create-virtual-machine) within the Azure Virtual Network where the Azure VMware Solution ExpressRoute terminates (see [Step 3. Connect to Azure Virtual Network with ExpressRoute](#step-3-connect-to-azure-virtual-network-with-expressroute)). 1. Log into the Azure [portal](https://portal.azure.com).
- 2. Navigate to a VM that is in the running state, and under **Settings**, select **Networking** and select the network interface resource.
- ![View network interfaces](../virtual-network/media/diagnose-network-routing-problem/view-nics.png)
- 4. On the left, select **Effective routes**. You'll see a list of address prefixes that are contained within the `/22` CIDR block you entered during the deployment phase.
+
+ 1. Navigate to a VM that is in the running state, and under **Settings**, select **Networking** and select the network interface resource.
+
+ :::image type="content" source="../virtual-network/media/diagnose-network-routing-problem/view-nics.png" alt-text="Screenshot showing virtual network interface settings.":::
+
+    1. On the left, select **Effective routes**. You'll see a list of address prefixes that are contained within the `/22` CIDR block you entered during the deployment phase. (You can also list these routes with the Azure CLI, as shown after these steps.)
1. If you want to log into both vCenter and NSX-T Manager, open a web browser and log into the same virtual machine used for network route validation. You can identify the vCenter and NSX-T Manager console's IP addresses and credentials in the Azure portal. Select your private cloud and then **Manage** > **Identity**.
- :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshots of the private cloud vCenter and NSX Manager URLs and credentials." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter and NSX Manager URLs and credentials." border="true":::
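As the CLI alternative referenced in the route-validation step above, the same effective routes can be listed for the VM's network interface; the resource group and NIC name are placeholders:

```bash
# Show effective routes for the VM's NIC; expect prefixes from your /22 block.
az network nic show-effective-route-table \
  --resource-group <rg> \
  --name <vm-nic-name> \
  --output table
```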
## Next steps
azure-vmware Deploy Disaster Recovery Using Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md
This guide covers the following replication scenarios:
1. Log into **vSphere Client** on the source site and access **HCX plugin**.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/hcx-vsphere.png" alt-text="HCX option in vSphere" border="true":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/hcx-vsphere.png" alt-text="Screenshot showing the HCX option in the vSphere Web Client." border="true":::
1. Enter the **Disaster Recovery** area and select **PROTECT VMS**.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png" alt-text="select protect vms" border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png" alt-text="Screenshot showing the Disaster Recovery dashboard in the vSphere Web Client." border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png":::
1. Select the Source and the Remote sites. The Remote site in this case should be the Azure VMware Solution private cloud.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machines.png" alt-text="protect VMs window" border="true":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machines.png" alt-text="Screenshot showing the HCX: Protected Virtual Machines window." border="true":::
1. If needed, select the **Default replication** options:
This guide covers the following replication scenarios:
- **Number of Snapshots:** Total number of snapshots within the configured snapshot interval.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machine-options.png" alt-text="protect VMs options" border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-virtual-machine-options.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machine-options.png" alt-text="Screenshot showing the Protect Virtual Machines replication options." border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-virtual-machine-options.png":::
1. Select one or more VMs from the list and configure the replication options as needed. By default, the VMs inherit the Global Settings Policy configured in the Default replication options. For each network interface in the selected VM, configure the remote **Network Port Group** and select **Finish** to start the protection process.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/network-interface-options.png" alt-text="network interface options" border="true" lightbox="./media/disaster-recovery-virtual-machines/network-interface-options.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/network-interface-options.png" alt-text="Screenshot showing the Protect Virtual Machines network interface options." border="true" lightbox="./media/disaster-recovery-virtual-machines/network-interface-options.png":::
1. Monitor the process for each of the selected VMs in the same disaster recovery area.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-monitor-progress.png" alt-text="monitor progress of protection" border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-monitor-progress.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-monitor-progress.png" alt-text="Screenshot showing the Protect Virtual Machines monitor progress of protection." border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-monitor-progress.png":::
1. After the VM has been protected, you can view the different snapshots in the **Snapshots** tab.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/list-of-snapshots.png" alt-text="list of snapshots" border="true" lightbox="./media/disaster-recovery-virtual-machines/list-of-snapshots.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/list-of-snapshots.png" alt-text="Screenshot showing the Protect Virtual Machines list of snapshots." border="true" lightbox="./media/disaster-recovery-virtual-machines/list-of-snapshots.png":::
The yellow triangle means the snapshots and the virtual machines haven't been tested in a Test Recovery operation.
This guide covers the following replication scenarios:
1. Log into **vSphere Client** on the remote site, which is the Azure VMware Solution private cloud. 1. Within the **HCX plugin**, in the Disaster Recovery area, select the vertical ellipses on any VM to display the operations menu and then select **Test Recover VM**.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/test-recover-virtual-machine.png" alt-text="Select Test Recover VM" border="true":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/test-recover-virtual-machine.png" alt-text="Screenshot showing the Test Recovery VM menu option." border="true":::
1. Select the options for the test and the snapshot you want to use to test different states of the VM.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/choose-snapshot.png" alt-text="choose a snapshot and select test" border="true":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/choose-snapshot.png" alt-text="Screenshot showing the Replica Snapshot instance to test." border="true":::
1. After selecting **Test**, the recovery operation begins. 1. When finished, you can check the new VM in the Azure VMware Solution private cloud vCenter.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/verify-test-recovery.png" alt-text="check recovery operation" border="true" lightbox="./media/disaster-recovery-virtual-machines/verify-test-recovery.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/verify-test-recovery.png" alt-text="Screenshot showing the check recovery operation summary." border="true" lightbox="./media/disaster-recovery-virtual-machines/verify-test-recovery.png":::
1. After testing has been done on the VM or any application running on it, do a cleanup to delete the test instance.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/cleanup-test-instance.png" alt-text="cleanup test instance" border="true" lightbox="./media/disaster-recovery-virtual-machines/cleanup-test-instance.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/cleanup-test-instance.png" alt-text="Screenshot showing the cleanup test instance." border="true" lightbox="./media/disaster-recovery-virtual-machines/cleanup-test-instance.png":::
## Recover VMs
This guide covers the following replication scenarios:
1. Select the VM to be recovered from the list, open the **ACTIONS** menu, and select **Recover VMs**.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/recover-virtual-machines.png" alt-text="recover VMs" border="true":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/recover-virtual-machines.png" alt-text="Screenshot showing the Recover VMs menu option." border="true":::
1. Configure the recovery options for each instance and select **Recover** to start the recovery operation.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/recover-virtual-machines-confirm.png" alt-text="recover VMs confirmation" border="true":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/recover-virtual-machines-confirm.png" alt-text="Screenshot showing the confirmation for recovering VMs to target site." border="true":::
1. After the recovery operation is completed, the new VMs appear in the remote vCenter Server inventory.
This guide covers the following replication scenarios:
1. From the list, select the VMs to be replicated back to the source site, open the **ACTIONS** menu, and select **Reverse**.
1. Select **Reverse** to start the replication.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/reverse-operation-virtual-machines.png" alt-text="Select reverse action under protect operations" border="true":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/reverse-operation-virtual-machines.png" alt-text="Screenshot showing the Reverse menu option." border="true":::
1. Monitor the progress in the details section of each VM.
- :::image type="content" source="./media/disaster-recovery-virtual-machines/review-reverse-operation.png" alt-text="review the results of reverse action" border="true" lightbox="./media/disaster-recovery-virtual-machines/review-reverse-operation.png":::
+ :::image type="content" source="./media/disaster-recovery-virtual-machines/review-reverse-operation.png" alt-text="Screenshot showing the results of reverse action." border="true" lightbox="./media/disaster-recovery-virtual-machines/review-reverse-operation.png":::
## Disaster recovery plan automation
azure-vmware Deploy Traffic Manager Balance Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
The gateways have Azure VMware Solution virtual machines (VMs) configured as bac
The diagram shows how Traffic Manager provides load balancing for the applications at the DNS level between regional endpoints. The gateways have backend pool members configured as IIS Servers and referenced as Azure VMware Solution external endpoints. Connection over the virtual network between the two private cloud regions uses an ExpressRoute gateway. Before you begin, first review the [Prerequisites](#prerequisites) and then we'll walk through the procedures to:
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/ecosystem-back-up-vms.md
A key principle of Azure VMware Solution is to enable you to continue to use you
Our backup partners have industry-leading backup and restore solutions in VMware-based environments. Customers have widely adopted these solutions for their on-premises deployments. Now these partners have extended their solutions to Azure VMware Solution, using Azure to provide a backup repository and a storage target for long-term retention and archival.
-Backup network traffic between Azure VMware Solution VMs and the backup repository in Azure travels over a high-bandwidth, low-latency link. Replication traffic across regions travels over the internal Azure backplane network, which lowers bandwidth costs for users.
+Network traffic for backups between Azure VMware Solution VMs and the backup repository in Azure travels over a high-bandwidth, low-latency link. Replication traffic across regions travels over the internal Azure backplane network, which lowers bandwidth costs for users.
>[!NOTE] >For common questions, see [our third-party backup solution FAQ](/azure/azure-vmware/faq#third-party-backup-and-recovery).
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/enable-public-internet-access.md
This article details how you can use the public IP functionality in Virtual WAN.
## Reference architecture

The architecture diagram shows a web server hosted in the Azure VMware Solution environment and configured with RFC1918 private IP addresses. The web service is made available to the internet through Virtual WAN public IP functionality. The public IP is typically a destination NAT translated in Azure Firewall. With DNAT rules, firewall policy translates public IP address requests to a private address (webserver) with a port.
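To make the DNAT translation concrete, here's a minimal sketch using classic Azure Firewall NAT rules in Azure PowerShell. The firewall name, resource group, and both IP addresses are hypothetical placeholders, and a Virtual WAN secured hub would typically express the same rule through its Firewall Policy instead:

```powershell
# Hypothetical names and addresses for illustration only
$rule = New-AzFirewallNatRule -Name "dnat-webserver" -Protocol "TCP" `
    -SourceAddress "*" `
    -DestinationAddress "203.0.113.10" -DestinationPort "80" `
    -TranslatedAddress "192.168.1.10" -TranslatedPort "80"

$collection = New-AzFirewallNatRuleCollection -Name "web-dnat" -Priority 200 -Rule $rule

# Attach the collection to an existing firewall and apply the change
$azfw = Get-AzFirewall -Name "my-firewall" -ResourceGroupName "my-rg"
$azfw.AddNatRuleCollection($collection)
Set-AzFirewall -AzureFirewall $azfw
```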
azure-vmware Fix Deployment Provisioning Failures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/fix-deployment-provisioning-failures.md
To copy the ExpressRoute ID:
1. In the right pane, select the **ExpressRoute** tab.
1. Select the copy icon for **ExpressRoute ID** and save the value to use in your support request.

## Pre-validation failures
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/integrate-azure-native-services.md
Last updated 06/15/2021
Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) in a hybrid environment (Azure, Azure VMware Solution, and on-premises). The Azure native services that you can integrate with Azure VMware Solution include:
- **Log Analytics workspace:** Each workspace has its own data repository and configuration for storing log data. Data sources and solutions are configured to store their data in a specific workspace. Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs.
-- **Azure Security Center:** Unified infrastructure security management system that strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises. It assesses the vulnerability of Azure VMware Solution VMs and raise alerts as needed. To enable Azure Security Center, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
+- **Azure Security Center:** Unified infrastructure security management system that strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises. It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. To enable Azure Security Center, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
- **Azure Sentinel:** A cloud-native, security information event management (SIEM) solution. It provides security analytics, alert detection, and automated threat response across an environment. Azure Sentinel is built on top of a Log Analytics workspace.
- **Azure Arc:** Extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms.
- **Azure Update Management:** Manages operating system updates for your Windows and Linux machines in a hybrid environment.
In this article, you'll integrate Azure native services in your Azure VMware Sol
>[!TIP] >You can [use an Azure Resource Manager (ARM) template to create an Automation account](../automation/quickstart-create-automation-account-template.md). Using an ARM template takes fewer steps compared to other deployment methods.
-1. [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). This links your Log Analytics workspace to your automation account. It also enables Azure and non-Azure VMs in Update Management.
+1. [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). It links your Log Analytics workspace to your automation account. It also enables Azure and non-Azure VMs in Update Management.
- If you have a workspace, select **Update management**. Then select the Log Analytics workspace and Automation account, and select **Enable**. The setup takes up to 15 minutes to complete.
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/introduction.md
Azure VMware Solution is a VMware validated solution with on-going validation an
The diagram shows the adjacency between private clouds and VNets in Azure, Azure services, and on-premises environments. Network access from private clouds to Azure services or VNets provides SLA-driven integration of Azure service endpoints. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud.
-![Image of Azure VMware Solution private cloud adjacency to Azure and on-premises](./media/adjacency-overview-drawing-final.png)
## Hosts, clusters, and private clouds
The next step is to learn key [private cloud and cluster concepts](concepts-priv
<!-- LINKS - internal --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md -
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
Azure VMware Solution supports all backup solutions. You'll need CloudAdmin priv
2. Select **Manage** > **Connectivity** > **ExpressRoute**.
-3. Copy the source's **ExpressRoute ID**. You'll need this to peer between the private clouds.
+3. Copy the source's **ExpressRoute ID**. You'll need it to peer between the private clouds.
### Create the target's authorization key
azure-vmware Move Ea Csp Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/move-ea-csp-subscriptions.md
You should have at least contributor rights on both **source** and **target** su
1. Once the validation is successful, select **Next** to start the migration of your private cloud.
- :::image type="content" source="media/move-subscriptions/move-resources-succeeded.png" alt-text=" Screenshot that shows the validation status of Succeeded.":::
+ :::image type="content" source="media/move-subscriptions/move-resources-succeeded.png" alt-text=" Screenshot showing the validation status of Succeeded.":::
1. Select the check box indicating you understand that the associated tools and scripts will not work until you update them to use the new resource IDs. Then select **Move**.
azure-vmware Production Ready Deployment Steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/production-ready-deployment-steps.md
This /22 CIDR network address block shouldn't overlap with any existing network
For a detailed breakdown of how the /22 CIDR network is broken down per private cloud, see the [Network planning checklist](tutorial-network-checklist.md#routing-and-subnet-considerations).

## Define the IP address segment for VM workloads
This network segment is used primarily for testing purposes during the initial d
**Example:** 10.0.4.0/24

## Define the virtual network gateway
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
This article shows you how to use Application Gateway in front of a web server f
## Topology

The diagram shows how Application Gateway is used to protect Azure IaaS virtual machines (VMs), Azure virtual machine scale sets, or on-premises servers. Application Gateway treats Azure VMware Solution VMs as on-premises servers.
-![Diagram showing how Application Gateway protects Azure IaaS virtual machines (VMs), Azure virtual machine scale sets, or on-premises servers.](media/protect-azure-vmware-solution-with-application-gateway/app-gateway-protects.png)
> [!IMPORTANT] > Azure Application Gateway is currently the only supported method to expose web apps running on Azure VMware Solution VMs.
azure-vmware Request Host Quota Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/request-host-quota-azure-vmware-solution.md
Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P
1. In **Partner Center**, select **CSP** to access the **Customers** area.
- :::image type="content" source="media/enable-azure-vmware-solution/csp-customers-screen.png" alt-text="Microsoft Partner Center customers area" lightbox="media/enable-azure-vmware-solution/csp-customers-screen.png":::
+ :::image type="content" source="media/enable-azure-vmware-solution/csp-customers-screen.png" alt-text="Screenshot showing the Microsoft Partner Center customer area." lightbox="media/enable-azure-vmware-solution/csp-customers-screen.png":::
1. Select your customer and then select **Add products**.
- :::image type="content" source="media/enable-azure-vmware-solution/csp-partner-center.png" alt-text="Microsoft Partner Center" lightbox="media/enable-azure-vmware-solution/csp-partner-center.png":::
+ :::image type="content" source="media/enable-azure-vmware-solution/csp-partner-center.png" alt-text="Screenshot showing Azure plan selected in the Microsoft Partner Center." lightbox="media/enable-azure-vmware-solution/csp-partner-center.png":::
1. Select **Azure plan** and then select **Add to cart**.
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
CSPs that want to purchase reserved instances for their customers must use the *
3. Expand customer details and select **Microsoft Azure Management Portal**.
- :::image type="content" source="media/reserved-instances/csp-partner-center-aobo.png" alt-text="Microsoft Partner Center customers area" lightbox="media/reserved-instances/csp-partner-center-aobo.png":::
+ :::image type="content" source="media/reserved-instances/csp-partner-center-aobo.png" alt-text="Screenshot showing the Microsoft Partner Center customer area with Microsoft Azure Management Portal selected." lightbox="media/reserved-instances/csp-partner-center-aobo.png":::
4. In the Azure portal, select **All services** > **Reservations**.
5. Select **Purchase Now** and then select **Azure VMware Solution**.
- :::image type="content" source="media/reserved-instances/csp-buy-reserved-instance-azure-portal.png" alt-text="Microsoft Azure portal reservations" lightbox="media/reserved-instances/csp-buy-reserved-instance-azure-portal.png":::
+ :::image type="content" source="media/reserved-instances/csp-buy-reserved-instance-azure-portal.png" alt-text="Screenshot showing where to purchase Azure VMware Solution reservations in the Microsoft Azure portal." lightbox="media/reserved-instances/csp-buy-reserved-instance-azure-portal.png":::
6. Enter the required fields. The selected attributes that match running Azure VMware Solution hosts qualify for the reservation discount. Attributes include the SKU, regions (where applicable), and scope. Reservation scope selects where the reservation savings apply.
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
This article helps you prepare your Azure VMware Solution environment to back up
## Supported VMware features

-- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter or ESXi server to back up the VM. Instead, just provide the IP address or fully qualified domain name (FQDN) and the sign in credentials used to authenticate the VMware server with Azure Backup Server.
+- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware server with Azure Backup Server.
- **Cloud-integrated backup:** Azure Backup Server protects workloads to disk and the cloud. The backup and recovery workflow of Azure Backup Server helps you manage long-term retention and offsite backup.
- **Detect and protect VMs managed by vCenter:** Azure Backup Server detects and protects VMs deployed on a vCenter or ESXi server. Azure Backup Server also detects VMs managed by vCenter so that you can protect large deployments.
- **Folder-level auto protection:** vCenter lets you organize your VMs in VM folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When protecting folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.
Follow the steps in this section to download, extract, and install the software
1. From the **Where is your workload running?** menu, select **On-Premises**.
- :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-on-premises-workload.png" alt-text="Screenshot showing the options for where your workload runs and what to backup.":::
+ :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-on-premises-workload.png" alt-text="Screenshot showing the options for where your workload runs and what to back up.":::
1. From the **What do you want to back up?** menu, select the workloads you want to protect by using Azure Backup Server.
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-access-private-cloud.md
In this tutorial, you learn how to:
The URLs and user credentials for the private cloud vCenter and NSX-T Manager are displayed.
- :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Display private cloud vCenter and NSX Manager URLs and credentials." border="true" lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter and NSX Manager URLs and credentials." border="true" lightbox="media/tutorial-access-private-cloud/ss4-display-identity.png":::
1. Navigate to the VM you created in the preceding step and connect to the virtual machine.
In this tutorial, you learn how to:
1. In the vCenter tab, enter the `cloudadmin@vmcp.local` user credentials from the previous step.
- :::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Sign in to private cloud vCenter." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Screenshot showing the VMware vSphere sign in page." border="true":::
- :::image type="content" source="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" alt-text="vCenter portal." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss6-vsphere-client-home.png" alt-text="Screenshot showing a summary of Cluster-1 in the vSphere Client." border="true":::
1. In the second tab of the browser, sign in to NSX-T manager.
- :::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="In the second tab of the browser, sign in to NSX-T manager." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss10-nsx-manager-home.png" alt-text="Screenshot of the NSX-T Manager Overview." border="true":::
Continue to the next tutorial to learn how to create a virtual network to set up
> [!div class="nextstepaction"] > [Create a Virtual Network](tutorial-configure-networking.md) -
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-configure-networking.md
In this tutorial, you learn how to:
1. Select **Review + create**.
- :::image type="content" source="./media/tutorial-configure-networking/create-virtual-network.png" alt-text="Select Review + create." border="true":::
+ :::image type="content" source="./media/tutorial-configure-networking/create-virtual-network.png" alt-text="Screenshot showing the settings for the new virtual network." border="true":::
1. Verify the information and select **Create**. Once the deployment is complete, you'll see your virtual network in the resource group.
Now that you've created a virtual network, you'll create a virtual network gatew
| **Gateway subnet address range** | This value is populated when you select the virtual network. Don't change the default value. |
| **Public IP address** | Select **Create new**. |
- :::image type="content" source="./media/tutorial-configure-networking/create-virtual-network-gateway.png" alt-text="Provide values for the fields and then select Review + create." border="true":::
+ :::image type="content" source="./media/tutorial-configure-networking/create-virtual-network-gateway.png" alt-text="Screenshot showing the details for the virtual network gateway." border="true":::
1. Verify that the details are correct, and select **Create** to start the deployment of your virtual network gateway.
1. Once the deployment completes, move to the next section to connect your ExpressRoute connection to the virtual network gateway containing your Azure VMware Solution private cloud.
azure-vmware Tutorial Deploy Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-deploy-vmware-hcx.md
Before you deploy the virtual appliance to your on-premises vCenter, you must do
1. Select **Manage** > **Connectivity** and select the **HCX** tab to identify the Azure VMware Solution HCX Manager's IP address.
- :::image type="content" source="media/tutorial-vmware-hcx/find-hcx-ip-address.png" alt-text="Screenshot of the VMware HCX IP address." lightbox="media/tutorial-vmware-hcx/find-hcx-ip-address.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/find-hcx-ip-address.png" alt-text="Screenshot showing the VMware HCX IP address." lightbox="media/tutorial-vmware-hcx/find-hcx-ip-address.png":::
1. Select **Manage** > **Identity**.
Before you deploy the virtual appliance to your on-premises vCenter, you must do
> [!TIP] > The vCenter password was defined when you set up the private cloud. It's the same password you'll use to sign in to Azure VMware Solution HCX Manager. You can select **Generate a new password** to generate new vCenter and NSX-T passwords.
- :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Display private cloud vCenter and NSX Manager URLs and credentials." border="true":::
+ :::image type="content" source="media/tutorial-access-private-cloud/ss4-display-identity.png" alt-text="Screenshot showing the private cloud vCenter and NSX Manager URLs and credentials." border="true":::
1. Open a browser window, sign in to the Azure VMware Solution HCX Manager on `https://x.x.x.9` port 443 with the **cloudadmin\@vsphere.local** user credentials.
Before you deploy the virtual appliance to your on-premises vCenter, you must do
1. Navigate to and select the OVA file that you downloaded and then select **Open**.
- :::image type="content" source="media/tutorial-vmware-hcx/select-ovf-template.png" alt-text="Screenshot of browsing to an OVF template." lightbox="media/tutorial-vmware-hcx/select-ovf-template.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/select-ovf-template.png" alt-text="Screenshot showing the Deploy OVF Template dialog and browsing to an OVF template." lightbox="media/tutorial-vmware-hcx/select-ovf-template.png":::
1. Select a name and location, and select a resource or cluster where you're deploying the VMware HCX Connector. Then review the details and required resources and select **Next**.
Before you deploy the virtual appliance to your on-premises vCenter, you must do
1. In **Customize template**, enter all required information and then select **Next**.
- :::image type="content" source="media/tutorial-vmware-hcx/customize-template.png" alt-text="Screenshot of the boxes for customizing a template." lightbox="media/tutorial-vmware-hcx/customize-template.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/customize-template.png" alt-text="Screenshot showing the Customize template settings for the OVF template." lightbox="media/tutorial-vmware-hcx/customize-template.png":::
1. Verify the configuration, and then select **Finish** to deploy the VMware HCX Connector OVA.
After you deploy the VMware HCX Connector OVA on-premises and start the applianc
After the services restart, you'll see vCenter showing as green on the screen that appears. Both vCenter and SSO must have the appropriate configuration parameters, which should be the same as the previous screen. For an end-to-end overview of this procedure, view the [Azure VMware Solution: Activate HCX](https://www.youtube.com/embed/PnVg6SZkQsY?rel=0&amp;vq=hd720) video.
You can connect or pair the VMware HCX Cloud Manager in Azure VMware Solution wi
You'll see a screen showing that your VMware HCX Cloud Manager in Azure VMware Solution and your on-premises VMware HCX Connector are connected (paired).
- :::image type="content" source="media/tutorial-vmware-hcx/site-pairing-complete.png" alt-text="Screenshot that shows the pairing of the HCX Manager in Azure VMware Solution and the VMware HCX Connector.":::
+ :::image type="content" source="media/tutorial-vmware-hcx/site-pairing-complete.png" alt-text="Screenshot showing the site pairing of the HCX Manager in Azure VMware Solution and the VMware HCX Connector.":::
For an end-to-end overview of this procedure, view the [Azure VMware Solution: HCX Site Pairing](https://www.youtube.com/embed/jXOmYUnbWZY?rel=0&amp;vq=hd720) video.
You'll create four network profiles:
1. Under **Infrastructure**, select **Interconnect** > **Multi-Site Service Mesh** > **Network Profiles** > **Create Network Profile**.
- :::image type="content" source="media/tutorial-vmware-hcx/network-profile-start.png" alt-text="Screenshot of selections for starting to create a network profile." lightbox="media/tutorial-vmware-hcx/network-profile-start.png":::
+ :::image type="content" source="media/tutorial-vmware-hcx/network-profile-start.png" alt-text="Screenshot showing where to create a network profile in the vSphere Client." lightbox="media/tutorial-vmware-hcx/network-profile-start.png":::
1. For each network profile, select the network and port group, provide a name, and create the segment's IP pool. Then select **Create**.
- :::image type="content" source="media/tutorial-vmware-hcx/example-configurations-network-profile.png" alt-text="Screenshot of details for a new network profile.":::
+ :::image type="content" source="media/tutorial-vmware-hcx/example-configurations-network-profile.png" alt-text="Screenshot showing the details for a new network profile.":::
For an end-to-end overview of this procedure, view the [Azure VMware Solution: HCX Network Profile](https://www.youtube.com/embed/O0rU4jtXUxc) video.
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Now that you've created an authorization key for the private cloud ExpressRoute
1. Enter the ExpressRoute ID and the authorization key created in the previous section.
- :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Screenshot that shows the dialog for entering the connection information.":::
+ :::image type="content" source="./media/expressroute-global-reach/on-premises-cloud-connections.png" alt-text="Screenshot showing the dialog for entering the connection information.":::
1. Select **Create**. The new connection shows in the on-premises cloud connections list.

>[!TIP] >You can delete or disconnect a connection from the list by selecting **More**. >
->:::image type="content" source="./media/expressroute-global-reach/on-premises-connection-disconnect.png" alt-text="Disconnect or deleted an on-premises connection":::
+>:::image type="content" source="./media/expressroute-global-reach/on-premises-connection-disconnect.png" alt-text="Screenshot showing how to disconnect or delete an on-premises connection in Azure VMware Solution.":::
## Verify on-premises network connectivity
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-network-checklist.md
In this tutorial, you'll learn about:
> * DHCP and DNS considerations in Azure VMware Solution

## Prerequisite
-Ensure that all gateways, including the ExpressRoute provider's service, support 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
+Ensure that all gateways, including the ExpressRoute provider's service, support a 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
## Virtual network and ExpressRoute circuit considerations

When you create a virtual network connection in your subscription, the ExpressRoute circuit is established through peering and uses an authorization key and a peering ID you request in the Azure portal. The peering is a private, one-to-one connection between your private cloud and the virtual network.
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-nsx-t-network-segment.md
Title: Tutorial - Add an NSX-T network segment in Azure VMware Solution
-description: Learn how to create a NSX-T network segment to use for virtual machines (VMs) in vCenter.
+description: Learn how to create an NSX-T network segment to use for virtual machines (VMs) in vCenter.
Last updated 03/13/2021
An Azure VMware Solution private cloud with access to the vCenter and NSX-T Mana
## Next steps
-In this tutorial, you created a NSX-T network segment to use for VMs in vCenter.
+In this tutorial, you created an NSX-T network segment to use for VMs in vCenter.
You can now:
azure-vmware Tutorial Scale Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-scale-private-cloud.md
You'll need an existing private cloud to complete this tutorial. If you haven't
1. On the overview page of an existing private cloud, under **Manage**, select **Scale private cloud**. Next, select **+ Add a cluster**.
- :::image type="content" source="./media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" alt-text="select add a cluster" border="true":::
+ :::image type="content" source="./media/tutorial-scale-private-cloud/ss2-select-add-cluster.png" alt-text="Screenshot showing how to add a cluster to an Azure VMware Solution private cloud." border="true":::
1. In the **Add cluster** page, use the slider to select the number of hosts. Select **Save**.
- :::image type="content" source="./media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" alt-text="In the Add cluster page, use the slider to select the number of hosts. Select Save." border="true":::
+ :::image type="content" source="./media/tutorial-scale-private-cloud/ss3-configure-new-cluster.png" alt-text="Screenshot showing how to configure a new cluster." border="true":::
The deployment of the new cluster will begin.
You'll need an existing private cloud to complete this tutorial. If you haven't
1. On the overview page of an existing private cloud, select **Scale private cloud** and select the pencil icon to edit the cluster.
- :::image type="content" source="./media/tutorial-scale-private-cloud/ss4-select-scale-private-cloud-2.png" alt-text="Select Scale private cloud in Overview" border="true":::
+ :::image type="content" source="./media/tutorial-scale-private-cloud/ss4-select-scale-private-cloud-2.png" alt-text="Screenshot showing where to edit an existing cluster." border="true":::
1. In the **Edit Cluster** page, use the slider to select the number of hosts. Select **Save**.
- :::image type="content" source="./media/tutorial-scale-private-cloud/ss5-scale-cluster.png" alt-text="In the Edit Cluster page, use the slider to select the number of hosts. Select Save." border="true":::
- The addition of hosts to the cluster begins.

## Next steps
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#pre
## On-premises vRealize Operations managing Azure VMware Solution deployment

Most customers have an existing on-premises deployment of vRealize Operations to manage one or more on-premises vCenter domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution. To extend the vRealize Operations capabilities to the Azure VMware Solution private cloud, you create an adapter [instance for the private cloud resources](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html). It collects data from the Azure VMware Solution private cloud and brings it into on-premises vRealize Operations. The on-premises vRealize Operations Manager instance can directly connect to the vCenter and NSX-T manager on Azure VMware Solution. Optionally, you can deploy a vRealize Operations Remote Collector on the Azure VMware Solution private cloud. The collector compresses and encrypts the data collected from the private cloud before it's sent over the ExpressRoute or VPN network to the vRealize Operations Manager running on-premises.
Another option is to deploy an instance of vRealize Operations Manager on a vSph
>[!IMPORTANT] >This option isn't currently supported by VMware.

Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter, ESXi, NSX-T, vSAN, and HCX.
Once the instance has been deployed, you can configure vRealize Operations to co
When you connect the Azure VMware Solution vCenter to vRealize Operations Manager using a vCenter Server Cloud Account, you'll see a warning. The warning occurs because the **cloudadmin\@vsphere.local** user in Azure VMware Solution doesn't have sufficient privileges to do all vCenter Server actions required for registration. However, the privileges are sufficient for the adapter instance to do data collection, as seen below. For more information, see [Privileges Required for Configuring a vCenter Adapter Instance](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.core.doc/GUID-3BFFC92A-9902-4CF2-945E-EA453733B426.html).
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-serverless.md
Install a code editor, such as [Visual Studio Code](https://code.visualstudio.co
Install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (version 2.7.1505 or higher) to run Azure Function apps locally.
+# [C#](#tab/csharp)
+
+Install a code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+
+Install the [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (version 3 or higher) to run Azure Function apps locally.
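If you don't already have the Core Tools, one common install path is through npm; this sketch assumes Node.js is available, and other package managers work too:

```bash
# Install Azure Functions Core Tools v3 globally via npm (assumes Node.js is installed)
npm install -g azure-functions-core-tools@3 --unsafe-perm true

# Confirm the tools are on your PATH
func --version
```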
+ [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
While the service is deploying, let's switch to working with code. Clone the [sa
- In the browser, open the **Azure portal** and confirm the Web PubSub Service instance you deployed earlier was successfully created. Navigate to the instance.
- Select **Keys** and copy out the connection string.
+# [JavaScript](#tab/javascript)
- Update function configuration.
While the service is deploying, let's switch to working with code. Clone the [sa
In *local.settings.json*, you need to make these changes and then save the file.
- Replace the placeholder *<connection-string>* with the real one copied from the **Azure portal** for the **`WebPubSubConnectionString`** setting.
- The **`AzureWebJobsStorage`** setting is required because [Azure Functions requires an Azure Storage account](../azure-functions/storage-considerations.md).
- - If you have Azure Storage Emulator run in local, keep the original settings of "UseDevelopmentStorage=true".
+ - If you have the [Azure Storage Emulator](https://go.microsoft.com/fwlink/?linkid=717179&clcid=0x409) running locally, keep the original setting of "UseDevelopmentStorage=true".
- If you have an Azure storage connection string, replace the value with it.
- JavaScript functions are organized into folders. In each folder are two files: `function.json` defines the bindings that are used in the function, and `index.js` is the body of the function. There are several triggered functions in this function app:
While the service is deploying, let's switch to working with code. Clone the [sa
func start ```
+# [C#](#tab/csharp)
+
+- Update function configuration.
+
+ Open the */samples/functions/csharp/simplechat* folder in the cloned repository. Edit *local.settings.json* to add the service connection string.
+ In *local.settings.json*, you need to make these changes and then save the file.
+ - Replace the placeholder *<connection-string>* with the real one copied from the **Azure portal** for the **`WebPubSubConnectionString`** setting.
+ - The **`AzureWebJobsStorage`** setting is required because [Azure Functions requires an Azure Storage account](../azure-functions/storage-considerations.md).
+ - If you have the [Azure Storage Emulator](https://go.microsoft.com/fwlink/?linkid=717179&clcid=0x409) running locally, keep the original setting of "UseDevelopmentStorage=true".
+ - If you have an Azure storage connection string, replace the value with it.
+
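For reference, a minimal *local.settings.json* for the C# sample might look like the following sketch. The connection string value is a placeholder, and `FUNCTIONS_WORKER_RUNTIME` set to `dotnet` is an assumption for this sample:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "WebPubSubConnectionString": "<connection-string>"
  }
}
```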
+- C# functions are organized by file Functions.cs. There are several triggered functions in this function app:
+
+ - **login** - This is the HTTP triggered function. It uses the *webPubSubConnection* input binding to generate and return valid service connection information.
+ - **connected** - This is the `WebPubSubTrigger` triggered function. It receives a chat message in the request body and broadcasts the message to all connected client applications using multiple tasks.
+ - **broadcast** - This is the `WebPubSubTrigger` triggered function. It receives a chat message in the request body and broadcasts the message to all connected client applications using a single task.
+ - **connect** and **disconnect** - These are the `WebPubSubTrigger` triggered functions. They handle the connect and disconnect events.
+
+- In the terminal, ensure that you are in the */samples/functions/csharp/simplechat* folder. Install the extensions and run the function app.
+
+ ```bash
+ func extensions install
+
+ func start
+ ```
+++

- The local function will use the port defined in the `local.settings.json` file. To make it available on the public network, you need to work with [ngrok](https://ngrok.com) to expose this endpoint. Run the command below and you'll get a forwarding endpoint, for example: http://{ngrok-id}.ngrok.io -> http://localhost:7071.

  ```bash
  ngrok http 7071
- ```
+ ```
- Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find your Web PubSub resource -> **Settings**. Add a new hub settings mapping to the one function in use as below. Replace the {ngrok-id} with yours.
While the service is deploying, let's switch to working with code. Clone the [sa
- User Event Pattern: *
- System Events: connect, connected, disconnected.

## Run the web application
If you're not going to continue to use this app, delete all resources created by
1. In the window that opens, select the resource group, and then select **Delete resource group**.
-1. In the new window, type the name of the resource group to delete, and then select **Delete**.
+1. In the new window, type the name of the resource group to delete, and then select **Delete**.
+
+## Next steps
+
+In this quickstart, you learned how to run a serverless simple chat application. Now you can start building your own application.
+
+> [!div class="nextstepaction"]
+> [Quick start: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
+
+> [!div class="nextstepaction"]
+> [Azure Web PubSub bindings for Azure Functions](https://azure.github.io/azure-webpubsub/references/functions-bindings)
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/azure-file-share-support-matrix.md
Azure file shares backup is available in all regions **except** for: Germany Cen
| Standard | Supported |
| Large | Supported |
| Premium | Supported |
-| File shares connected with Azure File sync service | Supported |
+| File shares connected with Azure File Sync service | Supported |
## Protection limits
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 05/20/2021 Last updated : 07/07/2021
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
- A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), isn't part of a blob and so isn't restored as part of a restore operation.
- A blob with an active lease can't be restored. If a blob with an active lease is included in the range of blobs to restore, the restore operation will fail automatically. Break any active leases before starting the restore operation.
- Snapshots aren't created or deleted as part of a restore operation. Only the base blob is restored to its previous state.
+- If there are [immutable blobs](../storage/blobs/storage-blob-immutable-storage.md#about-immutable-blob-storage) among those being restored, such immutable blobs won't be restored to their state as of the selected recovery point. However, other blobs that don't have immutability enabled will be restored to the selected recovery point as expected.
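For orientation, here's a minimal sketch of starting a point-in-time restore with the Azure CLI, assuming operational backup is already configured; the account, resource group, timestamp, and blob range below are hypothetical:

```bash
# Restore all blobs in a hypothetical range to their state at the given time
az storage blob restore \
    --account-name mystorageaccount \
    --resource-group my-rg \
    --time-to-restore 2021-07-01T00:00:00Z \
    --blob-range container0/ container0/z
```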
## Next steps
backup Quick Backup Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/quick-backup-vm-template.md
The resources defined in the template are:
- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/microsoft.network/networksecuritygroups)
- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks)
- [**Microsoft.Network/networkInterfaces**](/azure/templates/microsoft.network/networkinterfaces)
-- [**Microsoft.Compute/virutalMachines**](/azure/templates/microsoft.compute/virtualmachines)
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/microsoft.compute/virtualmachines)
- [**Microsoft.RecoveryServices/vaults**](/azure/templates/microsoft.recoveryservices/2016-06-01/vaults)
- [**Microsoft.RecoveryServices/vaults/backupFabrics/protectionContainers/protectedItems**](/azure/templates/microsoft.recoveryservices/vaults/backupfabrics/protectioncontainers/protecteditems)
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-no-public-ip-address.md
To restrict access to these nodes and reduce the discoverability of these nodes
- The VNet must be in the same subscription and region as the Batch account you use to create your pool.
- The subnet specified for the pool must have enough unassigned IP addresses to accommodate the number of VMs targeted for the pool; that is, the sum of the `targetDedicatedNodes` and `targetLowPriorityNodes` properties of the pool. If the subnet doesn't have enough unassigned IP addresses, the pool partially allocates the compute nodes, and a resize error occurs.
- You must disable private link service and endpoint network policies. This can be done by using Azure CLI:
- ```az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resouce-group <resourcegroup> --disable-private-endpoint-network-policies --disable-private-link-service-network-policies```
+ ```az network vnet subnet update --vnet-name <vnetname> -n <subnetname> --resource-group <resourcegroup> --disable-private-endpoint-network-policies --disable-private-link-service-network-policies```
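For example, with hypothetical network and resource group names substituted in, the call might look like the following sketch (some CLI versions expect an explicit `true` for the two boolean flags):

```bash
az network vnet subnet update \
    --vnet-name my-batch-vnet \
    -n my-batch-subnet \
    --resource-group my-batch-rg \
    --disable-private-endpoint-network-policies true \
    --disable-private-link-service-network-policies true
```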
> [!IMPORTANT] > For every 100 dedicated or low-priority nodes, Batch allocates one private link service and one load balancer. These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). For large pools, you might need to [request a quota increase](batch-quota-limit.md#increase-a-quota) for one or more of these resources. Additionally, no resource locks should be applied to any resource created by Batch, since this prevents cleanup of resources as a result of user-initiated actions such as deleting a pool or resizing to zero.
client-request-id: 00000000-0000-0000-0000-000000000000
## Outbound access to the internet
-In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
+In a pool without public IP addresses, your virtual machines won't be able to access the public internet unless you configure your network setup appropriately, such as by using [virtual network NAT](../virtual-network/nat-gateway/nat-overview.md). Note that NAT only allows outbound access to the internet from the virtual machines in the virtual network. Batch-created compute nodes won't be publicly accessible, since they don't have public IP addresses associated.
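As a sketch of the NAT option, the following Azure CLI commands create a NAT gateway and attach it to the pool's subnet; all resource names here are hypothetical:

```bash
# Standard-SKU public IP for the NAT gateway
az network public-ip create \
    --resource-group my-batch-rg \
    --name my-nat-ip \
    --sku Standard

# Create the NAT gateway and associate the public IP
az network nat gateway create \
    --resource-group my-batch-rg \
    --name my-batch-nat \
    --public-ip-addresses my-nat-ip

# Attach the NAT gateway to the subnet used by the Batch pool
az network vnet subnet update \
    --resource-group my-batch-rg \
    --vnet-name my-batch-vnet \
    --name my-batch-subnet \
    --nat-gateway my-batch-nat
```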
Another way to provide outbound connectivity is to use a user-defined route (UDR). This lets you route traffic to a proxy machine that has public internet access.
cdn Cdn Pop List Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-pop-list-api.md
na ms.devlang: na Previously updated : 08/22/2019 Last updated : 07/06/2021
To lock down your application to accept traffic only from Azure CDN from Microso
Configure IP ACLing for your backends to accept traffic from Azure CDN from Microsoft's backend IP address space and Azure's infrastructure services only.
-* Azure CDN from Microsoft's IPv4 backend IP space: 147.243.0.0/16
-* Azure CDN from Microsoft's IPv6 backend IP space: 2a01:111:2050::/44
-
-To use Service tags with Azure CDN from Microsoft, please use the Azure Front Door tag. IP Ranges and Service tags for Microsoft services can be found [here](https://www.microsoft.com/download/details.aspx?id=56519)
-
+Use Azure Front Door [service tags](../virtual-network/service-tags-overview.md) with Azure CDN from Microsoft to configure Microsoft's backend IP ranges. For a complete list, see [IP Ranges and Service tags](https://www.microsoft.com/en-us/download/details.aspx?id=56519) for Microsoft services.
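For instance, an inbound NSG rule keyed to the **AzureFrontDoor.Backend** service tag could look like the following sketch; the resource names and ports are hypothetical:

```bash
az network nsg rule create \
    --resource-group my-rg \
    --nsg-name my-origin-nsg \
    --name AllowFrontDoorBackend \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes AzureFrontDoor.Backend \
    --destination-port-ranges 80 443
```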
## Typical use case
For security purposes, you can use this IP list to enforce that requests to your
## Next steps
-For information about the REST API, see [Azure CDN REST API](/rest/api/cdn/).
+For information about the REST API, see [Azure CDN REST API](/rest/api/cdn/).
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-overview.md
This article provides an overview on the platform-supported migration tool and h
The migration tool utilizes the same APIs and has the same experience as the [Virtual Machine (classic) migration](../virtual-machines/migration-classic-resource-manager-overview.md).
-> [!IMPORTANT]
-> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
Refer to the following resources if you need assistance with your migration:
- [Microsoft Q&A](/answers/topics/azure-cloud-services-extended-support.html): Microsoft and community support for migration.
To perform this migration, you must be added as a coadministrator for the subscr
Register-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate ```
-5. Register your subscription for the Cloud Services migration preview feature using [Portal](../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), [PowerShell](../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell) or [CLI](../azure-resource-manager/management/resource-providers-and-types.md#azure-cli)
-
- ```powershell
- Register-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute
- ```
-
-6. Check the status of your registration. Registration can take a few minutes to complete.
+5. Check the status of your registration. Registration can take a few minutes to complete.
```powershell Get-AzProviderFeature -FeatureName CloudServices -ProviderNamespace Microsoft.Compute
Cloud Service deployments using legacy role sizes (such as Small or ExtraLarge).
| Migration of Cloud Service to different virtual network | Not supported <br><br> 1. Move the deployment to a different classic virtual network before migration. This will cause downtime. <br> 2. Migrate the new virtual network to Azure Resource Manager. <br><br> Or <br><br> 1. Migrate the virtual network to Azure Resource Manager <br>2. Move the Cloud Service to a new virtual network. This will cause downtime. |
| Cloud Service in a virtual network but does not have an explicit subnet assigned | Not supported. Mitigation involves moving the role into a subnet, which requires a role restart (downtime) |
-
-## Post Migration Changes
-The Cloud Services (classic) deployment is converted to a Cloud Service (extended support) deployment. Refer to [Cloud Services (extended support) documentation](deploy-prerequisite.md) for more details.
-
-### Changes to deployment files
-
-Minor changes are made to customer's .csdef and .cscfg file to make the deployment files conform to the Azure Resource Manager and Cloud Services (extended support) requirements. Post migration retrieves your new deployment files or update the existing files. This will be needed for update/delete operations.
-- Virtual Network uses full Azure Resource Manager resource ID instead of just the resource name in the NetworkConfiguration section of the .cscfg file. For example, `/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name`. For virtual networks belonging to the same resource group as the cloud service, you can choose to update the .cscfg file back to using just the virtual network name.
-- Classic sizes like Small, Large, ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-service-definition-file-csdef-updates)
-- Use the Get API to get the latest copy of the deployment files.
- - Get the template using [Portal](../azure-resource-manager/templates/export-template-portal.md), [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), [CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#export-resource-groups-to-templates), and [Rest API](/rest/api/resources/resourcegroups/exporttemplate)
- - Get the .csdef file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
- - Get the .cscfg file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
-
-
-
-### Changes to customer's Automation, CI/CD pipeline, custom scripts, custom dashboards, custom tooling, etc.
-
-Customers need to update their tooling and automation to start using the new APIs / commands to manage their deployment. Customer can easily adopt new features and capabilities of Azure Resource Manager/Cloud Services (extended support) as part of this change.
--- Changes to Resource and Resource Group names post migration
- - As part of migration, the names of few resources like the Cloud Service, public IP addresses, etc. change. These changes might need to be reflected in deployment files before update of Cloud Service. [Learn More about the names of resources changing](in-place-migration-technical-details.md#translation-of-resources-and-naming-convention-post-migration).
--- Recreate rules and policies required to manage and scale cloud services
- - [Auto Scale rules](configure-scaling.md) are not migrated. After migration, recreate the auto scale rules.
- - [Alerts](enable-alerts.md) are not migrated. After migration, recreate the alerts.
- - The Key Vault is created without any access policies. [Create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault to view or manage your certificates. Certificates will be visible under settings on the tab called secrets.
- ## Next steps
- [Overview of Platform-supported migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-overview.md)
- Migrate to Cloud Services (extended support) using the [Azure portal](in-place-migration-portal.md)
-- Migrate to Cloud Services (extended support) using [PowerShell](in-place-migration-powershell.md)
+- Migrate to Cloud Services (extended support) using [PowerShell](in-place-migration-powershell.md)
cloud-services-extended-support In Place Migration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-portal.md
This article shows you how to use the Azure portal to migrate from [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Cloud Services (extended support)](overview.md).
-> [!IMPORTANT]
-> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Before you begin **Ensure you are an administrator for the subscription.**
If you're not able to add a co-administrator, contact a service administrator or
If the prepare is successful, the migration is ready for commit.
- :::image type="content" source="media/in-place-migration-portal-4.png" alt-text="Image shows validation passing in the Azure portal.":::
+ :::image type="content" source="media/in-place-migration-portal-4.png" alt-text="Image shows validation passing in the Azure portal.":::
If the prepare fails, review the error, address any issues, and retry the prepare.
If you're not able to add a co-administrator, contact a service administrator or
Type in "yes" to confirm and commit to the migration. The migration is now complete. The migrated Cloud Services (extended support) deployment is unlocked for all operations. ## Next steps
-Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
+
+Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-powershell.md
These steps show you how to use Azure PowerShell commands to migrate from [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) to [Cloud Services (extended support)](overview.md).
-> [!IMPORTANT]
-> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## 1) Plan for migration Planning is the most important step for a successful migration experience. Review the [Cloud Services (extended support) overview](overview.md) and [Planning for migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-plan.md) prior to beginning any migration steps.
Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName
## Next steps
-Review the [Post migration changes](in-place-migration-overview.md#post-migration-changes) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
+
+Review the [Post migration changes](post-migration-changes.md) section to see changes in deployment files, automation and other attributes of your new Cloud Services (extended support) deployment.
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-technical-details.md
This article discusses the technical details regarding the migration tool as pertaining to Cloud Services (classic).
-> [!IMPORTANT]
-> Migrating from Cloud Services (classic) to Cloud Services (extended support) using the migration tool is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Details about feature / scenarios supported for migration ### Extensions and plugin migration
As part of migration, the resource names are changed, and few Cloud Services fea
| Cloud Services (classic) <br><br> Resource name | Cloud Services (classic) <br><br> Syntax| Cloud Services (extended support) <br><br> Resource name| Cloud Services (extended support) <br><br> Syntax | ||||| | Cloud Service | `cloudservicename` | Not associated| Not associated |
-| Deployment (portal created) <br><br> Deployment (non-portal created) | `deploymentname` | Cloud Services (extended support) | `deploymentname` |
-| Virtual Network | `vnetname` <br><br> `Group resourcegroupname vnetname` <br><br> Not associated | Virtual Network (not portal created) <br><br> Virtual Network (portal created) <br><br> Virtual Networks (Default) | `vnetname` <br><br> `group-resourcegroupname-vnetname` <br><br> `DefaultRdfevirtualnetwork_vnetid`|
-| Not associated | Not associated | Key Vault | `cloudservicename` |
+| Deployment (portal created) <br><br> Deployment (non-portal created) | `deploymentname` | Cloud Services (extended support) | `cloudservicename` |
+| Virtual Network | `vnetname` <br><br> `Group resourcegroupname vnetname` <br><br> Not associated | Virtual Network (not portal created) <br><br> Virtual Network (portal created) <br><br> Virtual Networks (Default) | `vnetname` <br><br> `group-resourcegroupname-vnetname` <br><br> `VNet-cloudservicename`|
+| Not associated | Not associated | Key Vault | `KV-cloudservicename` |
| Not associated | Not associated | Resource Group for Cloud Service Deployments | `cloudservicename-migrated` | | Not associated | Not associated | Resource Group for Virtual Network | `vnetname-migrated` <br><br> `group-resourcegroupname-vnetname-migrated`| | Not associated | Not associated | Public IP (Dynamic) | `cloudservicenameContractContract` | | Reserved IP Name | `reservedipname` | Reserved IP (non-portal created) <br><br> Reserved IP (portal created) | `reservedipname` <br><br> `group-resourcegroupname-reservedipname` |
-| Not associated| Not associated | Load Balancer | `deploymentname-lb`|
+| Not associated| Not associated | Load Balancer | `LB-cloudservicename`|
As part of migration, the resource names are changed, and few Cloud Services fea
- Customers can use PowerShell or Rest API to abort or commit. ### How much time can the operations take?
-Validate is designed to be quick. Prepare is longest running and takes some time depending on total number of role instances being migrated. Abort and commit can also take time but will take less time compared to prepare. All operations will time out after 24 hrs.
+Validate is designed to be quick. Prepare is the longest-running operation; its duration depends on the total number of role instances being migrated. Abort and commit also take time, but less than prepare. All operations time out after 24 hours.
cloud-services-extended-support Post Migration Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/post-migration-changes.md
+
+ Title: Azure Cloud Services (extended support) post migration changes
+description: Overview of post migration changes after migrating to Cloud Services (extended support)
++++++ Last updated : 2/08/2021++
+
+# Post migration changes
+The Cloud Services (classic) deployment is converted to a Cloud Service (extended support) deployment. For more information, see [Cloud Services (extended support) documentation](deploy-prerequisite.md).
+
+## Changes to deployment files
+
+Minor changes are made to the customer's .csdef and .cscfg files to make the deployment files conform to the Azure Resource Manager and Cloud Services (extended support) requirements. After migration, retrieve your new deployment files or update the existing files; you'll need them for update and delete operations.
+
+- Virtual Network uses the full Azure Resource Manager resource ID instead of just the resource name in the NetworkConfiguration section of the .cscfg file. For example, `/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name`. For virtual networks belonging to the same resource group as the cloud service, you can choose to update the .cscfg file back to using just the virtual network name. A sketch of the updated section follows this list.
+
+- Classic sizes like Small, Large, ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in the .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-service-definition-file-csdef-updates).
+
+- Use the Get API to get the latest copy of the deployment files.
+ - Get the template using [Portal](../azure-resource-manager/templates/export-template-portal.md), [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), [CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#export-resource-groups-to-templates), and [Rest API](/rest/api/resources/resourcegroups/exporttemplate)
+ - Get the .csdef file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
+ - Get the .cscfg file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
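+
+For illustration, here is a minimal sketch of how the NetworkConfiguration section of a migrated .cscfg file might reference the full resource ID. The subscription, resource group, virtual network, role, and subnet names are placeholders:
+
+```xml
+<NetworkConfiguration>
+  <VirtualNetworkSite name="/subscriptions/subscription-id/resourceGroups/resource-group-name/providers/Microsoft.Network/virtualNetworks/vnet-name" />
+  <AddressAssignments>
+    <InstanceAddress roleName="WebRole1">
+      <Subnets>
+        <Subnet name="subnet-name" />
+      </Subnets>
+    </InstanceAddress>
+  </AddressAssignments>
+</NetworkConfiguration>
+```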
+
+
+
+## Changes to customer's Automation, CI/CD pipeline, custom scripts, custom dashboards, custom tooling, etc.
+
+Customers need to update their tooling and automation to start using the new APIs and commands to manage their deployments. Customers can easily adopt new features and capabilities of Azure Resource Manager/Cloud Services (extended support) as part of this change.
+
+- Changes to Resource and Resource Group names post migration
+ - As part of migration, the names of a few resources, such as the Cloud Service and public IP addresses, change. These changes might need to be reflected in the deployment files before you update the Cloud Service. [Learn more about the names of resources changing](in-place-migration-technical-details.md#translation-of-resources-and-naming-convention-post-migration).
+
+- Recreate rules and policies required to manage and scale cloud services
+ - [Auto Scale rules](configure-scaling.md) are not migrated. After migration, recreate the auto scale rules.
+ - [Alerts](enable-alerts.md) are not migrated. After migration, recreate the alerts.
+ - The Key Vault is created without any access policies. [Create appropriate policies](../key-vault/general/assign-access-policy-portal.md) on the Key Vault to view or manage your certificates. Certificates are visible in the Key Vault under **Settings**, on the **Secrets** tab.
++
+## Changes to Certificate Management Post Migration
+
+As a standard practice for managing your certificates, add all valid .pfx certificate files to the certificate store in Key Vault; updates will then work through any client: the Azure portal, PowerShell, or the REST API.
+
+Currently, the Azure portal validates that all the required certificates are uploaded to the certificate store in Key Vault and warns if a certificate is not found. However, if you plan to use certificates as secrets, these certificates cannot be validated for their thumbprint, and any update operation that involves adding secrets will fail via the portal. Customers are recommended to use PowerShell or the REST API for updates involving secrets.
++
+## Changes for Update via Visual Studio
+If you publish updates directly from Visual Studio, first download the latest .cscfg file from your deployment after migration. Use this file as a reference to add the network configuration details to the current .cscfg file in your Visual Studio project. Then build the solution and publish it. You might have to choose the Key Vault and resource group for this update.
++
+## Next steps
+- [Overview of Platform-supported migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-overview.md)
+- Migrate to Cloud Services (extended support) using the [Azure portal](in-place-migration-portal.md)
+- Migrate to Cloud Services (extended support) using [PowerShell](in-place-migration-powershell.md)
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Last updated 02/12/2021 -+ # What is Custom Speech?
cognitive-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-data-loss-prevention.md
+
+ Title: Data Loss Prevention
+description: Cognitive Services Data Loss Prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This configuration creates another level of control for customers to prevent data loss.
++++ Last updated : 07/02/2021+++
+# Configure data loss prevention for Azure Cognitive Services
+
+Cognitive Services data loss prevention capabilities allow customers to configure the list of outbound URLs their Cognitive Services resources are allowed to access. This creates another level of control for customers to prevent data loss. In this article, we'll cover the steps required to enable the data loss prevention feature for Cognitive Services resources.
+
+## Prerequisites
+
+Before you make a request, you need an Azure account and an Azure Cognitive Services subscription. If you already have an account, go ahead and skip to the next section. If you don't have an account, we have a guide to get you set up in minutes: [Create a Cognitive Services account for Azure](cognitive-services-apis-create-account.md).
+
+You can get your subscription key from the [Azure portal](cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) after [creating your account](https://azure.microsoft.com/free/cognitive-services/).
+
+## Enabling data loss prevention
+
+There are two parts to enabling data loss prevention. First, the `restrictOutboundNetworkAccess` property must be set to `true`. When this property is set to `true`, you must also provide the list of approved URLs. The list of URLs is added to the `allowedFqdnList` property, which contains an array of comma-separated URLs.
+
+>[!Note]
+>The `allowedFqdnList` can contain up to 1,000 URLs and supports both IP addresses and wildcard domains, for example, `*.microsoft.com`. It can take up to 15 minutes for the updated list to take effect.
+
+# [Azure CLI](#tab/azure-cli)
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+
+1. View the details of the Cognitive Services resource.
+
+ ```azurecli-interactive
+ az cognitiveservices account show \
+ -g "myresourcegroup" -n "myaccount"
+ ```
+
+1. View the current properties of the Cognitive Services resource.
+
+ ```azurecli-interactive
+ az rest -m get \
+ -u /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.CognitiveServices/accounts/{account name}?api-version=2021-04-30
+ ```
+
+1. Configure the `restrictOutboundNetworkAccess` property and update the `allowedFqdnList` with the approved URLs.
+
+ ```azurecli-interactive
+ az rest -m patch \
+ -u /subscriptions/{subscription ID}/resourceGroups/{resource group}/providers/Microsoft.CognitiveServices/accounts/{account name}?api-version=2021-04-30 \
+ -b '{"properties": { "restrictOutboundNetworkAccess": true, "allowedFqdnList": [ "microsoft.com" ] }}'
+ ```
+
+# [PowerShell](#tab/powershell)
+
+1. Install [Azure PowerShell](/powershell/azure/install-az-ps) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+
+1. Display the current properties for the Cognitive Services resource.
+
+ ```azurepowershell-interactive
+ $getParams = @{
+ ResourceGroupName = 'myresourcegroup'
+ ResourceProviderName = 'Microsoft.CognitiveServices'
+ ResourceType = 'accounts'
+ Name = 'myaccount'
+ ApiVersion = '2021-04-30'
+ Method = 'GET'
+ }
+ Invoke-AzRestMethod @getParams
+ ```
+
+1. Configure the `restrictOutboundNetworkAccess` property and update the `allowedFqdnList` with the approved URLs.
+
+ ```azurepowershell-interactive
+ $patchParams = @{
+ ResourceGroupName = 'myresourcegroup'
+ ResourceProviderName = 'Microsoft.CognitiveServices'
+ ResourceType = 'accounts'
+ Name = 'myaccount'
+ ApiVersion = '2021-04-30'
+ Payload = '{"properties": { "restrictOutboundNetworkAccess": true, "allowedFqdnList": [ "microsoft.com" ] }}'
+ Method = 'PATCH'
+ }
+ Invoke-AzRestMethod @patchParams
+ ```
+++
+## Supported services
+
+The following services support data loss prevention configuration:
+
+- Computer Vision
+- Content Moderator
+- Custom Vision
+- Face
+- Form Recognizer
+- Speech Service
+- QnA Maker
+
+## Next steps
+
+- [Configure Virtual Networks](cognitive-services-virtual-networks.md)
cognitive-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/label-tool.md
To try out the Form Recognizer Sample Labeling Tool online, go to the [FOTT webs
### [v2.1](#tab/v2-1) > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://fott.azurewebsites.net/)
+> [Try Prebuilt Models](https://fott-2-1.azurewebsites.net/)
### [v2.0](#tab/v2-0)
cognitive-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
Previously updated : 10/14/2020 Last updated : 07/06/2021 zone_pivot_groups: programming-languages-metrics-monitor
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Audio and video communication is ephemerally processed by the service and no dat
Audio and video communication is ephemerally processed by the service and no data is retained in your resource other than Azure Monitor logs.
+### Call Recording
+
+Call recordings are stored temporarily for 48 hours in the same geography that was selected for `Data Location` during resource creation. After that, the recording is deleted, and you are responsible for storing it in a secure and compliant location.
+ ## Azure Monitor and Log Analytics Azure Communication Services will feed into Azure Monitor logging data for understanding operational health and utilization of the service. Some of these logs include Communication Service identities and phone numbers as field data. To delete any potentially personal data [use these procedures for Azure Monitor](../../azure-monitor/logs/personal-data-mgmt.md). You may also want to configure [the default retention period for Azure Monitor](../../azure-monitor/logs/manage-cost-storage.md).
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/direct-routing-provisioning.md
For more information about regular expressions, see [.NET regular expressions ov
You can select multiple SBCs for a single pattern. In such a case, the routing algorithm will choose them in random order. You may also specify the exact number pattern more than once. The higher row will have higher priority, and if all SBCs associated with that row are not available, the next row will be selected. This way, you can create complex routing scenarios.
+## Delete direct routing configuration
+
+### Delete using Azure portal
+
+#### To delete a Voice Route:
+1. In the left navigation, go to Direct routing under Voice Calling - PSTN and then select the Voice Routes tab.
+1. Select the route or routes you want to delete using the checkboxes.
+1. Select Remove.
+
+#### To delete an SBC:
+1. In the left navigation, go to Direct routing under Voice Calling - PSTN.
+1. On the Session Border Controllers tab, select Configure.
+1. Clear the FQDN and port fields for the SBC that you want to remove, and then select Next.
+1. On the Voice Routes tab, review the voice routing configuration and make changes if needed. Then select Save.
+
+> [!NOTE]
+> When you remove an SBC associated with a voice route, you can choose a different SBC for the route on the Voice Routes tab. A voice route without an SBC will be deleted.
+ ## Next steps ### Conceptual documentation
communication-services Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/teams-embed.md
[!INCLUDE [Public Preview Notice](../../includes/private-preview-include.md)]
+Teams Embed is an Azure Communication Services capability focused on common business-to-consumer and business-to-business calling interactions. The core of the Teams Embed system is [video and voice calling](../voice-video-calling/calling-sdk-features.md), and it builds on Azure's calling primitives to deliver a complete user experience based on Microsoft Teams meetings.
-Teams Embed is an Azure Communication Services capability focused on common business-to-consumer and business-to-business calling interactions. The core of the Teams Embed system is [video and voice calling](../voice-video-calling/calling-sdk-features.md), but the Teams Embed system builds on Azure's calling primitives to deliver a complete user experience based on Microsoft Teams meetings.
+The Teams Embed SDK is closed-source and makes these capabilities available to you in a turnkey, composite format. You drop Teams Embed into your app's canvas, and the SDK generates a complete user experience. Because this user experience is very similar to Microsoft Teams meetings, you can take advantage of:
-Teams Embed SDKs are closed-source and make these capabilities available to you in a turnkey, composite format. You drop Teams Embed into your app's canvas and the SDK generates a complete user experience. Because this user experience is very similar to Microsoft Teams meetings you can take advantage of:
+- Reduced development time and engineering complexity.
+- End-user familiarity with the Teams experience.
+- Ability to re-use [Teams end-user training content.](https://support.microsoft.com/office/meetings-in-teams-e0b0ae21-53ee-4462-a50d-ca9b9e217b67)
-- Reduced development time and engineering complexity-- End-user familiarity with Teams-- Ability to re-use [Teams end-user training content](https://support.microsoft.com/office/meetings-in-teams-e0b0ae21-53ee-4462-a50d-ca9b9e217b67)
+The Teams Embed SDK provides most features supported in Teams meetings, including:
-The Teams Embed provides most features supported in Teams meetings, including:
+## Joining a meeting
-- Pre-meeting experience where a user configures their audio and video devices-- In-meeting experience for configuring audio and video devices-- [Video Backgrounds](https://support.microsoft.com/office/change-your-background-for-a-teams-meeting-f77a2381-443a-499d-825e-509a140f4780): allowing participants to blur or replace their backgrounds-- [Multiple options for the video gallery](https://support.microsoft.com/office/using-video-in-microsoft-teams-3647fc29-7b92-4c26-8c2d-8a596904cdae) large gallery, together mode, focus, pinning, and spotlight-- [Content Sharing](https://support.microsoft.com/office/share-content-in-a-meeting-in-teams-fcc2bf59-aecd-4481-8f99-ce55dd836ce8): allowing participants to share their screen
+Users can easily join a meeting by using the Teams meeting URL and get a simple, familiar experience, just like the Teams application. Users can take part in large live meetings without losing the simplicity of the Teams experience.
-For more information about this UI compared to other Azure Communication SDKs, see the [UI SDK concept introduction](ui-library-overview.md).
+## Pre-meeting experience
+
+As a meeting participant, you can set up a default configuration for your audio and video devices, add your name, and bring your own avatar image.
+
+## Meeting experience
+
+Customize the user experience and adjust the capabilities according to your needs. You control the overall experience during meetings.
+
+- [**Video background blur effect**](https://support.microsoft.com/office/change-your-background-for-a-teams-meeting-f77a2381-443a-499d-825e-509a140f4780): The user can add a blur effect or change their background.
+
+- [**Content sharing**](https://support.microsoft.com/office/share-content-in-a-meeting-in-teams-fcc2bf59-aecd-4481-8f99-ce55dd836ce8): The user can share video, photo, or the whole screen, and the users will see the shared content.
+
+- [**Multiple layout options for the video gallery**](https://support.microsoft.com/office/using-video-in-microsoft-teams-3647fc29-7b92-4c26-8c2d-8a596904cdae): Select the default layout options during the meeting: large gallery, together mode, focus, pinning, and spotlight. The layout adapts to the device resolution.
+
+- [**Turn Video On/Off**](https://support.microsoft.com/office/using-video-in-microsoft-teams-3647fc29-7b92-4c26-8c2d-8a596904cdae#bkmk_turnvideoonoff): Let users manage their video during the meeting.
+
+- **Attendee actions**: The user can ["raise their hand"](https://support.microsoft.com/en-us/office/raise-your-hand-in-a-teams-meeting-bb2dd8e1-e6bd-43a6-85cf-30822667b372), mute and unmute their microphone, change the camera or audio configuration, hang up, and perform many more actions.
+
+- [**Multilanguage support**](https://support.microsoft.com/topic/languages-supported-in-microsoft-teams-for-education-293792c3-352e-4b24-9fc2-4c28b5de2db8): Supports 56 languages across the whole Teams experience.
+
+## Quality and security
+
+The Teams Embed SDK is built to Teams quality standards. For video quality, see [the bandwidth requirements](https://docs.microsoft.com/microsoftteams/prepare-network#bandwidth-requirements).
+
+You can use an Azure Communication Services access token. For more information, see [how to generate and manage access tokens](https://docs.microsoft.com/azure/communication-services/quickstarts/access-tokens).
+
+## Capabilities
+
+| SDK Capabilities | Availability |
+||--|
+| *Meeting actions* | |
+| Join a call | Yes |
+| Handle the *call state* (includes connection states, participants count, and modalities like microphone or camera state) | Yes |
+| Raise user events (hand raised, muted, and sending-video events) | Yes |
+| Flaky Network Handling support | Yes |
+| Remove participants from meeting | Yes |
+| Supports 56 languages | Yes |
+| Chat during meeting and 1:1 chat | No |
+| PSTN Calling | No |
+| Recording and transcript | No |
+| Whiteboard sharing | No |
+| Breakout rooms | No |
+| *Joining meeting experience* | |
+| Join group call with GUID and ACS token | Yes |
+| Join meeting with Live meeting URL and ACS token | Yes |
+| Join meeting with meeting URL and ACS token | Yes |
+| Join via waiting in lobby | Yes |
+
+| User Experience Customization | Availability |
+||--|
+| *Meeting actions* | |
+| Display the call roster | Yes |
+| Background blur | Yes |
+| Customize the layout: colors, icons, buttons | Partially |
+| Customize the call screen icons | Yes |
+| Change meeting views layout | Yes |
+| Dynamic call NxN layout changing | Yes |
+| Raise/Lower hand | Yes |
+| Mute/Un-mute | Yes |
+| Put on hold | Yes |
+| Select audio routing | Yes |
+| Select camera | Yes |
+| Share photo, screen and video | Yes |
+| Start/Stop video | Yes |
+| User tile press event | Yes |
+| Name plate press event | Yes |
+| Customize the screen background color | No |
+| Customize the top/bottom bar color | No |
+| Whiteboard sharing | No |
+| *Pre-meeting experience* | |
+| Joining can configure display name, enable photo sharing | Yes |
+| Joining can configure to show call staging view (pre call screen) | Yes |
+| Joining can configure to show name plate on call screen | Yes |
+| Customize the lobby screen | Partially |
+
+For more information about how to get started with the Teams Embed SDK, follow the [Getting Started guide](../../quickstarts/meeting/getting-started-with-teams-embed.md). If you want to learn more about the SDK capabilities, see the [samples guide](../../quickstarts/meeting/samples-for-teams-embed.md).
container-registry Container Registry Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-skus.md
Title: Registry service tiers and features description: Learn about the features and limits (quotas) in the Basic, Standard, and Premium service tiers (SKUs) of Azure Container Registry. Previously updated : 05/18/2020 Last updated : 06/24/2021 # Azure Container Registry service tiers
The following table details the features and registry limits of the Basic, Stand
[!INCLUDE [container-instances-limits](../../includes/container-registry-limits.md)]
+## Registry throughput and throttling
+
+### Throughput
+
+When generating a high rate of registry operations, use the service tier's limits for read and write operations and bandwidth as a guide for expected maximum throughput. These limits affect data-plane operations including listing, deleting, pushing, and pulling images and other artifacts.
+
+To estimate the throughput of image pulls and pushes specifically, consider the registry limits and these factors:
+
+* Number and size of image layers
+* Reuse of layers or base images across images
+* Additional API calls that might be required for each pull or push
+
+For details, see documentation for the [Docker HTTP API V2](https://docs.docker.com/registry/spec/api/).
+
+When evaluating or troubleshooting registry throughput, also consider the configuration of your client environment:
+
+* Your Docker daemon configuration for concurrent operations
+* Your network connection to the registry's data endpoint (or endpoints, if your registry is [geo-replicated](container-registry-geo-replication.md))
+
+If you experience issues with throughput to your registry, see [Troubleshoot registry performance](container-registry-troubleshoot-performance.md).
+
+#### Example
+
+Pushing a single 133 MB `nginx:latest` image to an Azure container registry requires multiple read and write operations for the image's five layers:
+
+* Read operations to read the image manifest, if it exists in the registry
+* Write operations to write each of the image's layers
+* Write operations to write the configuration blob of the image
+* Write operations to write the image manifest
+
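+For example, a hypothetical push of that image might look like the following; the registry and repository names are placeholders:
+
+```bash
+# Pull the public image, then tag and push it to your registry.
+# Each layer, the config blob, and the manifest generate registry operations.
+docker pull nginx:latest
+docker tag nginx:latest myregistry.azurecr.io/samples/nginx:latest
+docker push myregistry.azurecr.io/samples/nginx:latest
+```
+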
+### Throttling
+
+You may experience throttling of pull or push operations when the registry determines the rate of requests exceeds the limits allowed for the registry's service tier. You may see an HTTP 429 error similar to `Too many requests`.
+
+Throttling could occur temporarily when you generate a burst of image pull or push operations in a very short period, even when the average rate of read and write operations is within registry limits. You may need to implement retry logic with some backoff in your code or reduce the maximum rate of requests to the registry.
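+
+For example, here is a minimal retry-with-backoff sketch. It isn't specific to any Azure SDK; it assumes a plain HTTP request to a registry endpoint and honors the `Retry-After` header when present:
+
+```python
+import time
+import requests
+
+def get_with_backoff(url, headers=None, max_retries=5):
+    """Retry a request with exponential backoff when throttled (HTTP 429)."""
+    delay = 1  # seconds
+    for _ in range(max_retries):
+        response = requests.get(url, headers=headers)
+        if response.status_code != 429:
+            return response
+        retry_after = response.headers.get("Retry-After")
+        time.sleep(float(retry_after) if retry_after else delay)
+        delay *= 2  # back off exponentially before the next attempt
+    return response
+```
+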
+ ## Changing tiers You can change a registry's service tier with the Azure CLI or in the Azure portal. You can move freely between tiers as long as the tier you're switching to has the required maximum storage capacity.
Submit and vote on new feature suggestions in [ACR UserVoice][container-registry
[container-registry-geo-replication]: container-registry-geo-replication.md [container-registry-storage]: container-registry-storage.md [container-registry-delete]: container-registry-delete.md
-[container-registry-webhook]: container-registry-webhook.md
+[container-registry-webhook]: container-registry-webhook.md
container-registry Container Registry Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-performance.md
May include one or more of the following:
* Pull or push images with the Docker CLI takes longer than expected * Deployment of images to a service such as Azure Kubernetes Service takes longer than expected * You're not able to complete a large number of concurrent pull or push operations in the expected time
+* You see an HTTP 429 error similar to `Too many requests`
* Pull or push operations in a geo-replicated registry take longer than expected, or push fails with error `Error writing blob` or `Error writing manifest` ## Causes
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/high-availability.md
description: This article describes how Azure Cosmos DB provides high availabili
Previously updated : 02/05/2021 Last updated : 07/07/2021
Azure Cosmos DB provides comprehensive SLAs that encompass throughput, latency a
|Operation type | Single-region |Multi-region (single-region writes)|Multi-region (multi-region writes) | ||||-|
-|Writes | 99.99 |99.99 |99.999|
-|Reads | 99.99 |99.999 |99.999|
+|Writes | 99.99 |99.99 |99.999|
+|Reads | 99.99 |99.999 |99.999|
> [!NOTE] > In practice, the actual write availability for bounded staleness, session, consistent prefix and eventual consistency models is significantly higher than the published SLAs. The actual read availability for all consistency levels is significantly higher than the published SLAs.
When configuring multi-region writes for your Azure Cosmos account, you can opt
The following table summarizes the high availability capability of various account configurations:
-|KPI|Single-region without AZs|Single-region with AZs|Multi-region, single-region writes with AZs|Multi-region, multi-region writes with AZs|
-||||||
-|Write availability SLA | 99.99% | 99.995% | 99.995% | 99.999% |
-|Read availability SLA | 99.99% | 99.995% | 99.995% | 99.999% |
-|Zone failures – data loss | Data loss | No data loss | No data loss | No data loss |
-|Zone failures – availability | Availability loss | No availability loss | No availability loss | No availability loss |
-|Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information.
-|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss |
-|Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x 1.25 rate (***2***) | Multi-region write rate |
+|KPI|Single-region without AZs|Single-region with AZs|Multi-region, single-region writes without AZs|Multi-region, single-region writes with AZs|Multi-region, multi-region writes with or without AZs|
+|||||||
+|Write availability SLA | 99.99% | 99.995% | 99.99% | 99.995% | 99.999% |
+|Read availability SLA | 99.99% | 99.995% | 99.999% | 99.999% | 99.999% |
+|Zone failures – data loss | Data loss | No data loss | No data loss | No data loss | No data loss |
+|Zone failures – availability | Availability loss | No availability loss | No availability loss | No availability loss | No availability loss |
+|Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. |
+|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No availability loss |
+|Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x n regions | Provisioned RU/s x 1.25 rate x n regions (***2***) | Multi-region write rate x n regions |
***1*** For Serverless accounts, request units (RU) are multiplied by a factor of 1.25.
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
Title: Configure virtual network based access for an Azure Cosmos account description: This document describes the steps required to set up a virtual network service endpoint for Azure Cosmos DB. -+ Previously updated : 10/13/2020- Last updated : 07/07/2021+
The following sections describe how to configure a virtual network service endpo
:::image type="content" source="./media/how-to-configure-vnet-service-endpoint/choose-subnet-and-vnet.png" alt-text="Select virtual network and subnet":::
+ > [!NOTE]
+ > A virtual network service endpoint configuration can take up to 15 minutes to propagate, and the endpoint may exhibit inconsistent behavior during this period.
+ 1. After the Azure Cosmos DB account is enabled for access from a virtual network, it will allow traffic from only this chosen subnet. The virtual network and subnet that you added should appear as shown in the following screenshot: :::image type="content" source="./media/how-to-configure-vnet-service-endpoint/vnet-and-subnet-configured-successfully.png" alt-text="Virtual network and subnet configured successfully":::
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/manage-with-templates.md
This template creates an Azure Cosmos account, database and container with
This template will create a SQL Cosmos account, a natively maintained Role Definition, and a natively maintained Role Assignment for an AAD identity. This template is also available for one-click deploy from Azure Quickstart Templates Gallery.
-[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-cosmosdb-sql-rbac%2Fazuredeploy.json)
+[:::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Deploy to Azure":::](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.documentdb%2Fcosmosdb-sql-rbac%2Fazuredeploy.json)
<a id="free-tier"></a>
cosmos-db Optimize Write Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb/optimize-write-performance.md
+
+ Title: Optimize write performance in the Azure Cosmos DB API for MongoDB
+description: This article describes how to optimize write performance in the Azure Cosmos DB API for MongoDB to get the most throughput possible for the lowest cost.
++++ Last updated : 06/25/2021++++
+# Optimize write performance in Azure Cosmos DB API for MongoDB
+
+Optimizing write performance helps you get the most out of Azure Cosmos DB API for MongoDB's unlimited scale. Unlike other managed MongoDB services, the API for MongoDB automatically and transparently shards your collections for you (when using sharded collections) to scale infinitely.
+
+You need to be mindful of this when writing data, parallelizing and spreading writes across shards to get the most out of your databases and collections. This article explains best practices for optimizing write performance.
+
+## Spread the load across your shards
+When writing data to a sharded API for MongoDB collection, your data is split up (sharded) into tiny slices and it is written to each shard based on the value of your shard key field. You can think of each slice as a small portion of a virtual machine that only stores the documents containing one unique shard key value.
+
+If your application writes a massive amount of data to a single shard, this won't be efficient because the app would be maxing out the throughput of only one shard instead of spreading the load across all of your shards. Your write load will be evenly spread across your collection by writing in parallel to many documents with unique shard key values.
+
+One example of doing this would be a product catalog application that is sharded on the category field. Instead of writing to one category (shard) at a time, it's better to write to all categories simultaneously to achieve the maximum write throughput.
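+
+For example, here is a sketch using the Node.js MongoDB driver. The database, collection, and `docsByCategory` input are hypothetical, and the collection is assumed to be sharded on `category`:
+
+```JavaScript
+const { MongoClient } = require("mongodb");
+
+async function writeAllCategories(docsByCategory) {
+  const client = new MongoClient(process.env.CONNECTION_STRING);
+  await client.connect();
+  const collection = client.db("store").collection("products");
+
+  // Insert each category's documents in parallel so the write load
+  // spreads across all shards instead of maxing out a single one.
+  await Promise.all(
+    Object.values(docsByCategory).map(docs =>
+      collection.insertMany(docs, { ordered: false })
+    )
+  );
+
+  await client.close();
+}
+```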
+
+## Reduce the number of indexes
+[Indexing](../mongodb-indexing.md) is a great feature to drastically reduce the time it takes to query your data. For the most flexible query experience, the API for MongoDB enables a wildcard index on your data by default to make queries against all fields blazing-fast. However, all indexes, including wildcard indexes, introduce additional load when writing data because writes change the collection and its indexes.
+
+Reducing the number of indexes to only the indexes you need to support your queries will make your writes faster and cheaper. As a general rule, we recommend the following:
+
+* Any field that you filter on should have a corresponding single-field index for it. This option also enables multi-field filtering.
+* Any group of fields that you sort on should have a composite index for that group.
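+
+For example, assuming a collection with hypothetical `category`, `price`, and `rating` fields, the corresponding indexes might look like this:
+
+```JavaScript
+// Single-field index for a field you filter on
+db.products.createIndex({ category: 1 })
+
+// Composite (compound) index for a group of fields you sort on
+db.products.createIndex({ price: 1, rating: -1 })
+```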
+
+## Set ordered to false in the MongoDB drivers
+By default, the MongoDB drivers set the ordered option to "true" when writing data, which writes each document in order one by one. This option reduces write performance since each write request has to wait for the previous one to complete. When writing data, set this option to false to improve performance.
+
+```JavaScript
+db.collection.insertMany(
+ [ <doc1> , <doc2>, ... ],
+ {
+ ordered: false
+ }
+)
+```
+
+## Tune for the optimal batch size and thread count
+Parallelization of write operations across many threads/processes is key to scaling writes. The API for MongoDB accepts writes in batches of up to 1,000 documents for each process/thread.
+
+If you are writing more than 1,000 documents at a time per process/thread, client functions such as `insertMany()` should be limited to roughly 1,000 documents. Otherwise, the client will wait for each batch to commit before moving on to the next batch. In some cases, batches slightly smaller or slightly larger than 1,000 documents will be faster.
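+
+Here is a sketch of this batching pattern; `db.collection` and the `docs` array are placeholders:
+
+```JavaScript
+const batchSize = 1000;
+const batches = [];
+
+// Split the documents into batches of roughly 1,000
+for (let i = 0; i < docs.length; i += batchSize) {
+  batches.push(docs.slice(i, i + batchSize));
+}
+
+// Write the batches in parallel, unordered, for maximum throughput
+await Promise.all(
+  batches.map(batch => db.collection.insertMany(batch, { ordered: false }))
+);
+```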
+++
+## Next steps
+
+* Learn more about [indexing in the API for MongoDB](../mongodb-indexing.md).
+* Learn more about [Azure Cosmos DB's sharding/partitioning](../partitioning-overview.md).
+* Learn more about [troubleshooting common issues](../mongodb-troubleshoot.md).
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/partitioning-overview.md
Each physical partition consists of a set of replicas, also referred to as a [*r
Typically, smaller containers only require a single physical partition, but they will still have at least 4 replicas.
-The following image shows how logical partitions are mapped to physical partitions that are distributed globally:
+The following image shows how logical partitions are mapped to physical partitions that are distributed globally. [Partition set](global-dist-under-the-hood.md#partition-sets) in the image refers to a group of physical partitions that manage the same logical partition keys across multiple regions:
:::image type="content" source="./media/partitioning-overview/logical-partitions.png" alt-text="An image that demonstrates Azure Cosmos DB partitioning" border="false":::
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/role-based-access-control.md
Title: Azure role-based access control in Azure Cosmos DB description: Learn how Azure Cosmos DB provides database protection with Active directory integration (Azure RBAC).- Last updated 06/17/2021-++
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)] > [!NOTE]
-> This article is about role-based access control for management plane operations in Azure Cosmos DB. If you are using data plane operations, see [Azure Cosmos DB RBAC](how-to-setup-rbac.md) for role-based access control applied to your data plane operations.
+> Azure RBAC support in Azure Cosmos DB applies to management plane operations only. This article is about role-based access control for management plane operations in Azure Cosmos DB. If you are using data plane operations, data is secured using primary keys, resource tokens, or the Azure Cosmos DB RBAC. To learn more about role-based access control applied to data plane operations, see [Secure access to data](secure-access-to-data.md) and [Azure Cosmos DB RBAC](how-to-setup-rbac.md) articles.
Azure Cosmos DB provides built-in Azure role-based access control (Azure RBAC) for common management scenarios in Azure Cosmos DB. An individual who has a profile in Azure Active Directory can assign these Azure roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations on Azure Cosmos DB resources. Role assignments are scoped to control-plane access only, which includes access to Azure Cosmos accounts, databases, containers, and offers (throughput).
The following are the built-in roles supported by Azure Cosmos DB:
| [CosmosRestoreOperator](../role-based-access-control/built-in-roles.md) | Can perform restore action for Azure Cosmos DB account with continuous backup mode.| |[Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator)|Can provision Azure Cosmos accounts, databases, and containers. Cannot access any data or use Data Explorer.|
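For example, a built-in role can be granted with the Azure CLI's generic role assignment command; the principal, subscription, resource group, and account names below are placeholders:

```azurecli
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Cosmos DB Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
```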
-> [!IMPORTANT]
-> Azure RBAC support in Azure Cosmos DB applies to control plane operations only. Data plane operations are secured using primary keys, resource tokens or the Cosmos DB RBAC. To learn more, see [Secure access to data in Azure Cosmos DB](secure-access-to-data.md)
- ## Identity and access management (IAM) The **Access control (IAM)** pane in the Azure portal is used to configure Azure role-based access control on Azure Cosmos resources. The roles are applied to users, groups, service principals, and managed identities in Active Directory. You can use built-in roles or custom roles for individuals and groups. The following screenshot shows Active Directory integration (Azure RBAC) using access control (IAM) in the Azure portal:
cosmos-db Troubleshoot Dot Net Sdk Request Header Too Large https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-dot-net-sdk-request-header-too-large.md
A 400 bad request most likely occurs because the session token is too large. If
Restart your client application to reset all the session tokens. Eventually, the session token will grow back to the previous size that caused the issue. To avoid this issue completely, use the solution in the next section. #### Solution:
+> [!IMPORTANT]
+> Upgrade to at least .NET [v3.20.1](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/changelog.md) or [v2.15.0](https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/changelog.md). These minor versions contain optimizations to reduce the session token size to prevent the header from growing and hitting the size limit.
1. Follow the guidance in the [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) or [.NET v2](performance-tips.md) performance tips articles. Convert the application to use the direct connection mode with the Transmission Control Protocol (TCP). The direct connection mode with the TCP protocol doesn't have the header size restriction like the HTTP protocol, so it avoids this issue. Make sure to use the latest version of the SDK, which has a fix for query operations when the service interop isn't available. 1. If the direct connection mode with the TCP protocol isn't an option for your workload, mitigate it by changing the [client consistency level](how-to-manage-consistency.md). The session token is only used for session consistency, which is the default consistency level for Azure Cosmos DB. Other consistency levels don't use the session token.
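For illustration, here is a minimal sketch of creating a .NET SDK v3 client in direct mode; the endpoint and key are placeholders:

```csharp
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient(
    "https://<account-name>.documents.azure.com:443/",
    "<account-key>",
    new CosmosClientOptions
    {
        // Direct mode uses TCP, which doesn't have the HTTP header size restriction
        ConnectionMode = ConnectionMode.Direct
    });
```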
cost-management-billing Activate Subs Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/activate-subs-accounts.md
- Title: Activate Azure subscriptions and accounts
-description: Enable access using Azure Resource Manager APIs for new and existing accounts and resolve common account problems.
-- Previously updated : 10/23/2020---------
-# Activate Azure subscriptions and accounts with Cloudyn
-
-Adding or updating your Azure Resource Manager credentials allows Cloudyn to discover all the accounts and subscriptions within your Azure Tenant. If you also have Azure Diagnostics extension enabled on your virtual machines, then Cloudyn can collect extended metrics like CPU and memory. This article describes how to enable access using Azure Resource Manager APIs for new and existing accounts. It also describes how to resolve common account problems.
-
-Cloudyn cannot access most of your Azure subscription data when the subscription is _unactivated_. You must edit _unactivated_ accounts so that Cloudyn can access them.
--
-## Required Azure permissions
-
-Specific permissions are needed to complete the procedures in this article. Either you or your tenant administrator must have both of the following permissions:
-- Permission to register the CloudynCollector application with your Azure AD tenant.
-- The ability to assign the application to a role in your Azure subscriptions.
-
-In your Azure subscriptions, your accounts must have `Microsoft.Authorization/*/Write` access to assign the CloudynCollector application. This action is granted through the [Owner](../../role-based-access-control/built-in-roles.md#owner) role or [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role.
-
-If your account is assigned the **Contributor** role, you do not have adequate permission to assign the application. You receive an error when attempting to assign the CloudynCollector application to your Azure subscription.
-
-### Check Azure Active Directory permissions
-
-1. Sign in into the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, select **Azure Active Directory**.
-3. In Azure Active Directory, select **User settings**.
-4. Check the **App registrations** option.
- - If it is set to **Yes**, then non-administrator users can register AD apps. This setting means any user in the Azure AD tenant can register an app.
- ![select App registrations in User settings](./media/activate-subs-accounts/app-register.png)
- - If the **App registrations** option is set to **No**, then only tenant administrative users can register Azure Active Directory apps. Your tenant administrator must register the CloudynCollector application.
--
-## Add an account or update a subscription
-
-When you add an account or update a subscription, you grant Cloudyn access to your Azure data.
-
-### Add a new account (subscription)
-
-1. In the Cloudyn portal, click the gear symbol in the upper-right and select **Cloud Accounts**.
-2. Click **Add new account** and the **Add new account** box appears. Enter the required information.
- ![enter required information in the Add new account box](./media/activate-subs-accounts/add-new-account.png)
-
-### Update a subscription
-
-1. If you want to update an _unactivated_ subscription that already exists in Cloudyn in Accounts Management, click the edit pencil symbol to the right of the parent _tenant GUID_. Subscriptions are grouped under a parent tenant, so avoid activating subscriptions individually.
- ![select your tenant ID in the Rediscover subscriptions box](./media/activate-subs-accounts/existing-sub.png)
-2. If necessary, enter the Tenant ID. If you don't know your Tenant ID, use the following steps to find it:
- 1. Sign in to the [Azure portal](https://portal.azure.com).
- 2. In the Azure portal, select **Azure Active Directory**.
- 3. To get the tenant ID, select **Properties** for your Azure AD tenant.
- 4. Copy the Directory ID GUID. This value is your tenant ID.
- For more information, see [Get tenant ID](../../active-directory/develop/howto-create-service-principal-portal.md#get-tenant-and-app-id-values-for-signing-in).
-3. If necessary, select your Rate ID. If you don't know your rate ID, use the following steps to find it.
- 1. In the upper-right of the Azure portal, click your user information and then click **View my bill**.
- 2. Under **Billing Account**, click **Subscriptions**.
- 3. Under **My subscriptions**, select the subscription.
- 4. Your rate ID is shown under **Offer ID**. Copy the Offer ID for the subscription.
-4. In the Add new account (or Edit Subscription) box, click **Save** (or **Next**). You're redirected to the Azure portal.
-5. Sign in to the portal. Click **Accept** to authorize Cloudyn Collector access your Azure account.
-
- You're redirected to the Cloudyn Accounts management page and your subscription is updated with **active** Account Status. It should display a green check mark symbol under the Resource Manager column.
-
- If you don't see a green checkmark symbol for one or more of the subscriptions, it means that you do not have permissions to create the reader app (the CloudynCollector) for the subscription. A user with higher permissions for the subscription needs to repeat this process.
-
-Watch the [Connecting to Azure Resource Manager with Cloudyn](https://youtu.be/oCIwvfBB6kk) video that walks through the process.
-
->[!VIDEO https://www.youtube.com/embed/oCIwvfBB6kk?ecver=1]
-
-## Resolve common indirect enterprise set-up problems
-
-When you first use the Cloudyn portal, you might see the following messages if you are an Enterprise Agreement or Cloud Solution Provider (CSP) user:
-- *The specified API key is not a top level enrollment key* displayed in the **Set Up Cloudyn** wizard.
-- *Direct Enrollment – No* displayed in the Enterprise Agreement portal.
-- *No usage data was found for the last 30 days. Please contact your distributor to make sure markup was enabled for your Azure account* displayed in the Cloudyn portal.
-The preceding messages indicate that you purchased an Azure Enterprise Agreement through a reseller or CSP. Your reseller or CSP needs to enable _markup_ for your Azure account so that you can view your data in Cloudyn.
-
-Here's how to fix the problems:
-
-1. Your reseller needs to enable _markup_ for your account. For instructions, see the [Indirect Customer Onboarding Guide](https://ea.azure.com/api/v3Help/v2IndirectCustomerOnboardingGuide).
-2. You generate the Azure Enterprise Agreement key for use with Cloudyn.
-
-Before you can generate the Azure Enterprise Agreement API key to set up Cloudyn, you must enable the Azure Billing API by following the instructions at:
-- [Overview of Reporting APIs for Enterprise customers](../manage/enterprise-api.md)
-- [Microsoft Azure enterprise portal Reporting API](https://ea.azure.com/helpdocs/reportingAPI) under **Enabling data access to the API**
-
-You also might need to give department administrators, account owners, and enterprise administrators permissions to _view charges_ with the Billing API.
-
-Only an Azure service administrator can enable Cloudyn. Co-administrator permissions are insufficient. However, you can work around the administrator requirement. You can request that your Azure Active Directory administrator grant permission to authorize the **CloudynAzureCollector** with a PowerShell script. The following script grants permission to register the Azure Active Directory Service Principal **CloudynAzureCollector**.
--
-```powershell
-#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-#Tenant - enter your tenant ID or Name
-$tenant = "<ReplaceWithYourTenantID>"
-
-#Cloudyn Collector application ID
-$appId = "83e638ef-7885-479f-bbe8-9150acccdb3d"
-
-#URL to activate the consent screen
-$url = "https://login.windows.net/"+$tenant+"/oauth2/authorize?api-version=1&response_type=code&client_id="+$appId+"&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2FCloudynJava&prompt=consent"
-
-#Choose your browser, the default is Internet Explorer
-
-#Chrome
-#[System.Diagnostics.Process]::Start("chrome.exe", "--incognito $url")
-
-#Firefox
-#[System.Diagnostics.Process]::Start("firefox.exe","-private-window $url" )
-
-#IExplorer
-[System.Diagnostics.Process]::Start("iexplore.exe","$url -private" )
-
-```
-
-## Next steps
--- If you haven't already completed the first tutorial for Cloudyn, read it at [Review usage and costs](tutorial-review-usage.md).
cost-management-billing Azure Vm Extended Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/azure-vm-extended-metrics.md
- Title: Add extended metrics for Azure virtual machines
-description: This article helps you enable and configure extended diagnostics metrics for your Azure VMs.
- Previously updated : 03/12/2020
-# Add extended metrics for Azure virtual machines
-
-Cloudyn uses Azure metric data from your Azure VMs to show you detailed information about their resources. Metric data, also called performance counters, is used by Cloudyn to generate reports. However, Cloudyn does not automatically gather all Azure metric data from guest VMs; you must enable metric collection. This article helps you enable and configure additional diagnostics metrics for your Azure VMs.
-
-After you enable metric collection, you can:
-- Know when your VMs are reaching their memory, disk, and CPU limits.
-- Detect usage trends and anomalies.
-- Control your costs by sizing according to usage.
-- Get cost effective sizing optimization recommendations from Cloudyn.
-For example, you might want to monitor the CPU % and Memory % of your Azure VMs. The Azure VM metrics correspond to _Percentage CPU_ and _\Memory\% Committed Bytes In Use_.
-
-> [!NOTE]
-> Extended metric data collection is only supported with Azure guest-level monitoring. Cloudyn is not compatible with the [Log Analytics agent](../../azure-monitor/agents/agents-overview.md).
--
-## Determine whether extended metrics are enabled
-
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. Under **Virtual machines**, select a VM and then under **Monitoring**, select **Metrics**. A list of available metrics is shown.
-3. Select some metrics and a graph displays data for them.
- ![Example metric – host percentage CPU](./media/azure-vm-extended-metrics/metric01.png)
-
-In the preceding example, a limited set of standard metrics is available for your host, but memory metrics are not. Memory metrics are part of extended metrics. In this case, extended metrics are not enabled for the VM. You must perform some additional steps to enable extended metrics. The following information guides you through enabling them.
-
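-You can also check from the command line. The following is a minimal sketch, assuming the Az PowerShell module is installed; the resource group and VM names are placeholders. If no `\Memory\*` counters appear among the metric definitions, extended metrics aren't enabled yet.
-
-```powershell
-# Sign in, look up the VM, then list the metric definitions available for it.
-Connect-AzAccount
-$vm = Get-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyVM"
-Get-AzMetricDefinition -ResourceId $vm.Id |
-    ForEach-Object { $_.Name.Value }
-```
-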
-## Enable extended metrics in the Azure portal
-
-Standard metrics are host computer metrics. The _Percentage CPU_ metric is one example. There are also basic metrics for guest VMs and they're also called extended metrics. Examples of extended metrics include _\Memory\% Committed Bytes In Use_ and _\Memory\Available Bytes_.
-
-Enabling extended metrics is straightforward. For each VM, enable guest-level monitoring. When you enable guest-level monitoring, the Azure diagnostics agent is installed on the VM. By default, a basic set of extended metrics is added. The process is the same for classic and regular VMs, and for Windows and Linux VMs.
-
-Keep in mind that both Windows and Linux guest-level monitoring require a storage account. When you enable guest-level monitoring, if you don't choose an existing storage account, then one is created for you.
-
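-If you want to create the storage account ahead of time, here's a minimal Azure PowerShell sketch; the resource group, account name, and location are placeholder values.
-
-```powershell
-# Create a locally redundant storage account to hold the diagnostics data.
-New-AzStorageAccount -ResourceGroupName "MyResourceGroup" `
-    -Name "mydiagstorage001" `
-    -Location "eastus" `
-    -SkuName "Standard_LRS"
-```
-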
-### Enable guest-level monitoring on existing VMs
-
-1. In **Virtual Machines**, view your list of your VMs and then select a VM.
-2. Under **Monitoring**, select **Diagnostic settings**.
-3. On the Diagnostics settings page, click **Enable guest-level monitoring**.
- ![Enable guest level monitoring on the Overview page](./media/azure-vm-extended-metrics/enable-guest-monitoring.png)
-4. After a few minutes, the Azure diagnostics agent is installed on the VM. A basic set of metrics is added. Refresh the page. The added performance counters appear on the Overview tab.
-5. Under Monitoring, select **Metrics**.
-6. In the metrics chart under **Metric Namespace**, select **Guest (Classic)**.
-7. In the Metric list, you can view all of the available performance counters for the guest VM.
- ![list of example extended metrics](./media/azure-vm-extended-metrics/extended-metrics.png)
-
-### Enable guest-level monitoring on new VMs
-
-When you create new VMs, on the Management tab, select **On** for **OS guest diagnostics**.
-
-![set Guest OS diagnostics to On](./media/azure-vm-extended-metrics/new-enable-diag.png)
-
-For more information about enabling extended metrics for Azure virtual machines, see [Understanding and using the Azure Linux agent](../../virtual-machines/extensions/agent-linux.md) and [Azure Virtual Machine Agent overview](../../virtual-machines/extensions/agent-windows.md).
-
-## Resource Manager credentials
-
-After you enable extended metrics, ensure that Cloudyn has access to your [Resource Manager credentials](./activate-subs-accounts.md). Your credentials are required for Cloudyn to collect and display performance data for your VMs. They're also used to create cost optimization recommendations. Cloudyn needs at least three days of performance data from an instance to determine if it is a candidate for a downsizing recommendation.
-
-## Enable VM metrics with a script
-
-You can enable VM metrics with Azure PowerShell scripts. When you have many VMs that you want to enable metrics on, you can use a script to automate the process. Example scripts are on GitHub at [Azure Enable Diagnostics](https://github.com/Cloudyn/azure-enable-diagnostics).
-
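-For a single VM, a minimal sketch with the Az PowerShell module looks like the following; the resource group, VM, storage account, and configuration file path are placeholders, and the configuration file defines which performance counters to collect.
-
-```powershell
-# Install the Azure Diagnostics extension on a Windows VM and send the
-# collected performance counters to the specified storage account.
-Set-AzVMDiagnosticsExtension -ResourceGroupName "MyResourceGroup" `
-    -VMName "MyVM" `
-    -DiagnosticsConfigurationPath "C:\diagnostics\DiagnosticsConfiguration.xml" `
-    -StorageAccountName "mydiagstorage001"
-```
-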
-## View Azure performance metrics
-
-To view performance metrics on your Azure Instances in the Cloudyn portal, navigate to **Assets** > **Compute** > **Instance Explorer**. In the list of VM instances, expand an instance and then expand a resource to view details.
-
-![example information shown in Instance Explorer](./media/azure-vm-extended-metrics/instance-explorer.png)
-
-## Next steps
--- If you haven't already enabled Azure Resource Manager API access for your accounts, proceed to [Activate Azure subscriptions and accounts](./activate-subs-accounts.md).
cost-management-billing Connect Aws Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/connect-aws-account.md
- Title: Connect an Amazon Web Services account to Cloudyn in Azure
-description: Connect an Amazon Web Services account to view cost and usage data in Cloudyn reports.
- Previously updated : 03/12/2020
-# Connect an Amazon Web Services account
-
-You have two options to connect your Amazon Web Services (AWS) account to Cloudyn. You can connect with an IAM role or with a read-only IAM user account. The IAM role is recommended because it allows you to delegate access with defined permissions to trusted entities. The IAM role doesn't require you to share long-term access keys. After you connect an AWS account to Cloudyn, cost and usage data is available in Cloudyn reports. This document guides you through both options.
-
-For more information about AWS IAM identities, see [Identities (Users, Groups, and Roles)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html).
-
-You also enable AWS detailed billing reports and store the information in an AWS simple storage service (S3) bucket. Detailed billing reports include billing charges with tag and resource information on an hourly basis. Storing the reports allows Cloudyn to retrieve them from your bucket and display the information in its reports.
-
-## AWS role-based access
-
-The following sections walk you through creating a read-only IAM role to provide access to Cloudyn.
-
-### Get your Cloudyn account external ID
-
-The first step is to get the unique connection passphrase from the Cloudyn portal. It is used in AWS as the **External ID**.
-
-1. Open the Cloudyn portal from the Azure portal or navigate to [https://azure.cloudyn.com](https://azure.cloudyn.com) and sign in.
-2. Click the cog symbol and then select **Cloud Accounts**.
-3. In Accounts Management, select the **AWS Accounts** tab and then click **Add new +**.
-4. In the **Add AWS Account** dialog, copy the **External ID** and save the value for AWS Role creation steps in the next section. The External ID is unique to your account. In the following image, the example External ID is _Contoso_ followed by a number. Your ID differs.
- ![External ID shown in the Add AWS Account box](./media/connect-aws-account/external-id.png)
-
-### Add AWS read-only role-based access
-
-1. Sign in to the AWS console at [https://console.aws.amazon.com/iam/home](https://console.aws.amazon.com/iam/home) and select **Roles**.
-2. Click **Create Role** and then select **Another AWS account**.
-3. In the **Account ID** box, paste `432263259397`. This Account ID is the Cloudyn data collector account assigned by AWS to the Cloudyn service. Use the exact Account ID shown.
-4. Next to **Options**, select **Require external ID**. Paste the unique value that you previously copied from the **External ID** field in Cloudyn. Then click **Next: Permissions**.
- ![paste External ID from Cloudyn on the Create role page](./media/connect-aws-account/create-role01.png)
-5. Under **Attach permissions policies**, in the **Policy type** filter box, type `ReadOnlyAccess`, select **ReadOnlyAccess**, and then click **Next: Review**.
- ![select Read-only access in the list of policy names](./media/connect-aws-account/readonlyaccess.png)
-6. On the Review page, ensure your selections are correct and type a **Role name**. For example, *Azure-Cost-Mgt*. Enter a **Role description**. For example, _Role assignment for Cloudyn_, then click **Create role**.
-7. In the **Roles** list, click the role you created and copy the **Role ARN** value from the Summary page. Use the Role ARN (Amazon Resource Name) value later when you register your configuration in Cloudyn.
- ![copy the Role ARN from the Summary page](./media/connect-aws-account/role-arn.png)
-
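-If you prefer to script the role creation, the following is a hypothetical sketch using the AWS Tools for PowerShell; the role name is an example, and the external ID placeholder must be replaced with the value from Cloudyn.
-
-```powershell
-# Trust policy that lets the Cloudyn collector account assume the role,
-# gated by your unique external ID.
-$externalId = "<YourCloudynExternalId>"
-$trustPolicy = @"
-{
-  "Version": "2012-10-17",
-  "Statement": [{
-    "Effect": "Allow",
-    "Principal": { "AWS": "arn:aws:iam::432263259397:root" },
-    "Action": "sts:AssumeRole",
-    "Condition": { "StringEquals": { "sts:ExternalId": "$externalId" } }
-  }]
-}
-"@
-
-# Create the role, attach the managed ReadOnlyAccess policy, and print the Role ARN.
-$role = New-IAMRole -RoleName "Azure-Cost-Mgt" -AssumeRolePolicyDocument $trustPolicy
-Register-IAMRolePolicy -RoleName "Azure-Cost-Mgt" -PolicyArn "arn:aws:iam::aws:policy/ReadOnlyAccess"
-$role.Arn
-```
-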
-### Configure AWS IAM role access in Cloudyn
-
-1. Open the Cloudyn portal from the Azure portal or navigate to https://azure.cloudyn.com/ and sign in.
-2. Click the cog symbol and then select **Cloud Accounts**.
-3. In Accounts Management, select the **AWS Accounts** tab and then click **Add new +**.
-4. In **Account Name**, type a name for the account.
-5. Next to **Access Type**, select **IAM Role**.
-6. In the **Role ARN** field, paste the value you previously copied and then click **Save**.
- ![paste the Role ARN in the Add AWS Account box](./media/connect-aws-account/add-aws-account-box.png)
--
-Your AWS account appears in the list of accounts. The **Owner ID** listed matches your Role ARN value. Your **Account Status** should have a green check mark symbol indicating that Cloudyn can access your AWS account. Until you enable detailed AWS billing, your consolidation status appears as **Standalone**.
-
-![AWS account status shown on the Accounts Management page](./media/connect-aws-account/aws-account-status01.png)
-
-Cloudyn starts collecting the data and populating reports. Next, [enable detailed AWS billing](#enable-detailed-aws-billing).
--
-## AWS user-based access
-
-The following sections walk you through creating a read-only user to provide access to Cloudyn.
-
-### Add AWS read-only user-based access
-
-1. Sign in to the AWS console at [https://console.aws.amazon.com/iam/home](https://console.aws.amazon.com/iam/home) and select **Users**.
-2. Click **Add User**.
-3. In the **User name** field, type a user name.
-4. For **Access type**, select **Programmatic access** and click **Next: Permissions**.
- ![enter a user name on the Add user page](./media/connect-aws-account/add-user01.png)
-5. For permissions, select **Attach existing policies directly**.
-6. Under **Attach permissions policies**, in the **Policy type** filter box, type `ReadOnlyAccess`, select **ReadOnlyAccess**, and then click **Next: Review**.
- ![select ReadOnlyAccess to set permissions for the user](./media/connect-aws-account/set-permission-for-user.png)
-7. On the Review page, ensure your selections are correct then click **Create user**.
-8. On the Complete page, your Access key ID and Secret access key are shown. You use this information to configure registration in Cloudyn.
-9. Click **Download .csv** and save the credentials.csv file to a secure location.
- ![click Download .csv to save the credentials](./media/connect-aws-account/download-csv.png)
-
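-The same setup can be scripted. Here's a hypothetical sketch using the AWS Tools for PowerShell; the user name is an example, and you should store the returned secret key securely.
-
-```powershell
-# Create a programmatic-access user, attach ReadOnlyAccess, and generate keys.
-New-IAMUser -UserName "cloudyn-readonly"
-Register-IAMUserPolicy -UserName "cloudyn-readonly" -PolicyArn "arn:aws:iam::aws:policy/ReadOnlyAccess"
-$key = New-IAMAccessKey -UserName "cloudyn-readonly"
-$key.AccessKeyId       # paste into Cloudyn as the Access Key
-$key.SecretAccessKey   # paste into Cloudyn as the Secret Key
-```
-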
-### Configure AWS IAM user-based access in Cloudyn
-
-1. Open the Cloudyn portal from the Azure portal or navigate to https://azure.cloudyn.com/ and sign in.
-2. Click the cog symbol and then select **Cloud Accounts**.
-3. In Accounts Management, select the **AWS Accounts** tab and then click **Add new +**.
-4. For **Account Name**, type an account name.
-5. Next to **Access Type**, select **IAM User**.
-6. In **Access Key**, paste the **Access key ID** value from the credentials.csv file.
-7. In **Secret Key**, paste the **Secret access key** value from the credentials.csv file and then click **Save**.
-
-Your AWS account appears in the list of accounts. Your **Account Status** should have a green check mark symbol.
-
-Cloudyn starts collecting the data and populating reports. Next, [enable detailed AWS billing](#enable-detailed-aws-billing).
-
-## Enable detailed AWS billing
-
-Use the following steps to get your AWS Role ARN. You use the Role ARN to grant read permissions to a billing bucket.
-
-1. Sign in to the AWS console at [https://console.aws.amazon.com](https://console.aws.amazon.com) and select **Services**.
-2. In the Service Search box, type *IAM* and select that option.
-3. Select **Roles** from the left-hand menu.
-4. In the list of Roles, select the role that you created for Cloudyn access.
-5. On the Roles Summary page, click to copy the **Role ARN**. Keep the Role ARN handy for later steps.
-
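-If the role was created with the AWS Tools for PowerShell, a one-line sketch retrieves the same value; the role name is a placeholder.
-
-```powershell
-# Look up the role and print its ARN.
-(Get-IAMRole -RoleName "Azure-Cost-Mgt").Arn
-```
-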
-### Create an S3 bucket
-
-You create an S3 bucket to store detailed billing information.
-
-1. Sign in to the AWS console at [https://console.aws.amazon.com](https://console.aws.amazon.com) and select **Services**.
-2. In the Service Search box, type *S3* and select **S3**.
-3. On the Amazon S3 page, click **Create bucket**.
-4. In the Create bucket wizard, choose a Bucket name and Region and then click **Next**.
- ![example information on the Create bucket page](./media/connect-aws-account/create-bucket.png)
-5. On the **Set properties** page, keep the default values, and then click **Next**.
-6. On the Review page, click **Create bucket**. Your bucket list is displayed.
-7. Click the bucket that you created and select the **Permissions** tab and then select **Bucket Policy**. The Bucket policy editor opens.
-8. Copy the following JSON example and paste it in the Bucket policy editor.
- - Replace `<BillingBucketName>` with the name of your S3 bucket.
- - Replace `<ReadOnlyUserOrRole>` with the Role or User ARN that you had previously copied.
-
- ```json
- {
- "Version": "2012-10-17",
- "Id": "Policy1426774604000",
- "Statement": [
- {
- "Sid": "Stmt1426774604000",
- "Effect": "Allow",
- "Principal": {
- "AWS": "arn:aws:iam::386209384616:root"
- },
- "Action": [
- "s3:GetBucketAcl",
- "s3:GetBucketPolicy"
- ],
- "Resource": "arn:aws:s3:::<BillingBucketName>"
- },
- {
- "Sid": "Stmt1426774604001",
- "Effect": "Allow",
- "Principal": {
- "AWS": "arn:aws:iam::386209384616:root"
- },
- "Action": "s3:PutObject",
- "Resource": "arn:aws:s3:::<BillingBucketName>/*"
- },
- {
- "Sid": "Stmt1426774604002",
- "Effect": "Allow",
- "Principal": {
- "AWS": "<ReadOnlyUserOrRole>"
- },
- "Action": [
- "s3:List*",
- "s3:Get*"
- ],
- "Resource": "arn:aws:s3:::<BillingBucketName>/*"
- }
- ]
- }
- ```
-
-9. Click **Save**.
- ![click Save in the Bucket policy editor](./media/connect-aws-account/bucket-policy-editor.png)
--
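-If you script your AWS setup, a minimal sketch with the AWS Tools for PowerShell creates the bucket and applies the policy; the bucket name, region, and policy file path are placeholders, and the policy file contains the JSON shown above with the placeholders replaced.
-
-```powershell
-# Create the billing bucket and attach the bucket policy.
-New-S3Bucket -BucketName "my-billing-bucket" -Region "us-east-1"
-$policy = Get-Content -Raw "billing-bucket-policy.json"
-Write-S3BucketPolicy -BucketName "my-billing-bucket" -Policy $policy
-```
-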
-### Enable AWS billing reports
-
-After you create and configure the S3 bucket, navigate to [Billing Preferences](https://console.aws.amazon.com/billing/home?#/preference) in the AWS console.
-
-1. On the Preferences page, select **Receive Billing Reports**.
-2. Under **Receive Billing Reports**, enter the name of the bucket that you created and then click **Verify**.
-3. Select all four report granularity options and then click **Save preferences**.
- ![select granularity to enable reports](./media/connect-aws-account/enable-reports.png)
-
-Cloudyn retrieves detailed billing information from your S3 bucket and populates reports after detailed billing is enabled. It can take up to 24 hours until detailed billing data appears in the Cloudyn console. When detailed billing data is available, your account consolidation status appears as **Consolidated**. Account status appears as **Completed**.
-
-![consolidation status shown on the AWS Accounts tab](./media/connect-aws-account/consolidated-status.png)
-
-Some of the optimization reports may require a few days of data to get an adequate data sample size for accurate recommendations.
-
-## Next steps
--- To learn more about Cloudyn, continue to the [Review usage and costs](tutorial-review-usage.md) tutorial for Cloudyn.
cost-management-billing Cost Mgt Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/cost-mgt-faq.md
- Title: Frequently asked questions for Cloudyn in Azure
-description: Learn how to use the Cloudyn portal to resolve common indirect enterprise setup problems and answer other frequently asked questions.
- Previously updated : 10/23/2020
-# Frequently asked questions for Cloudyn
-
-This article addresses some common questions about Cloudyn. If you have questions about Cloudyn, you can ask them at [FAQs for Cloudyn](https://social.msdn.microsoft.com/Forums/en-US/231bf072-2c71-4121-8339-ac9d868137b9/faqs-for-cloudyn-cost-management?forum=Cloudyn).
--
-## How can I resolve common indirect enterprise setup problems?
-
-When you first use the Cloudyn portal, you might see the following messages if you are an Enterprise Agreement or Cloud Solution Provider (CSP) user:
-- "The specified API key is not a top level enrollment key" displayed in the **Set Up Cloudyn** wizard.
-- "Direct Enrollment – No" displayed in the Enterprise Agreement portal.
-- "No usage data was found for the last 30 days. Please contact your distributor to make sure markup was enabled for your Azure account" displayed in the Cloudyn portal.
-The preceding messages indicate that you purchased an Azure Enterprise Agreement through a reseller or CSP. Your reseller or CSP needs to enable _markup_ for your Azure account so that you can view your data in Cloudyn.
-
-Here's how to fix the problems:
-
-1. Your reseller needs to enable _markup_ for your account. For instructions, see the [Indirect Customer Onboarding Guide](https://ea.azure.com/api/v3Help/v2IndirectCustomerOnboardingGuide).
-
-2. You generate the Azure Enterprise Agreement key for use with Cloudyn.
-
-Only an Azure service administrator can enable Cloudyn. Co-administrator permissions are insufficient.
-
-Before you can generate the Azure Enterprise Agreement API key to set up Cloudyn, you must enable the Azure Billing API by following the instructions at:
-- [Overview of Reporting APIs for Enterprise customers](../manage/enterprise-api.md)
-- [Microsoft Azure enterprise portal Reporting API](https://ea.azure.com/helpdocs/reportingAPI) under **Enabling data access to the API**
-You also might need to give department administrators, account owners, and enterprise administrators permissions to _view charges_ with the Billing API.
-
-## Why don't I see Optimizer recommendations?
-
-Recommendation information is only available for accounts that are activated. You will not see any recommendation information in **Optimizer** report categories for accounts that are *unactivated*, including:
-- Optimization Manager
-- Sizing Optimization
-- Inefficiencies
-If you cannot view any Optimizer recommendation data, then most likely, you have accounts that are unactivated. To activate an account, you need to register it with your Azure credentials.
-
-To activate an account:
-
-1. In the Cloudyn portal, click **Settings** in the upper right and select **Cloud Accounts**.
-2. On the Microsoft Azure Accounts tab, look for accounts that have an **unactivated** subscription.
-3. To the right of an unactivated account, click the **edit** symbol that resembles a pencil.
-4. Your tenant ID and rate ID are automatically detected. Click **Next**.
-5. You're redirected to the Azure portal. Sign in to the portal and authorize Cloudyn Collector to access your Azure data.
-6. Next, you're redirected to the Cloudyn Accounts management page and your subscription is updated with **active** Account Status. It shows a green check mark symbol.
-7. If you don't see a green checkmark symbol for one or more of the subscriptions, it means that you do not have permissions to create a reader app (the CloudynCollector) for the subscription. A user with higher permissions for the subscription needs to repeat steps 3 and 4.
-
-After you complete the preceding steps, you can view Optimizer recommendations within one to two days. However, it can take up to five days before full optimization data is available.
--
-## How do I enable suspended or locked-out users?
-
-First, let's look at the most common scenario that causes user accounts to get *initiallySuspended*.
-
-> Admin1 might be a Microsoft Cloud Solution Provider or Enterprise Agreement user. Their organization is ready to start using Cloudyn. They register through the Azure portal and sign in to the Cloudyn portal. As the person who registers the Cloudyn service and signs into the Cloudyn portal, Admin1 becomes the *primary administrator*. Admin1 does not create any user accounts. However, using the Cloudyn portal, they do create Azure accounts and set up an entity hierarchy. Admin1 informs Admin2, a tenant administrator, that they need to register with Cloudyn and sign in to the Cloudyn portal.
->
-> Admin2 registers through the Azure portal. However, when they try to sign in to the Cloudyn portal, they get an error saying their account is **suspended**. The primary administrator, Admin1, is notified of the account suspension. Admin1 needs to activate Admin2's account, grant *admin entity access* for the appropriate entities, and allow user management access.
--
-If you receive an alert with a request to allow access for a user, you need to activate the user account.
-
-To activate the user account:
-
-1. Sign in to Cloudyn by using the Azure administrative user account that you used to set up Cloudyn. Or, sign in with a user account that was granted administrator access.
-2. Select the gear symbol in the upper right, and select **User Management**.
-3. Find the user, select the pencil symbol, and then edit the user.
-4. Under **User status**, change the status from **Suspended** to **Active**.
-
-Cloudyn user accounts connect by using single sign-on from Azure. If a user mistypes their password, they might get locked out of Cloudyn, even though they can still access Azure.
-
-If you change your e-mail address in Cloudyn from the default address in Azure, your account can get locked out. It might show "status initiallySuspended." If your user account is locked out, contact an alternate administrator to reset your account.
-
-We recommend that you create at least two Cloudyn administrator accounts in case one of the accounts gets locked out.
-
-If you can't sign in to the Cloudyn portal, ensure that you're using the correct URL to sign in to Cloudyn. Use [https://azure.cloudyn.com](https://ms.portal.azure.com/#blade/Microsoft_Azure_CostManagement/CloudynMainBlade).
-
-Avoid using the Cloudyn direct URL `https://app.cloudyn.com`.
-
-## How do I activate unactivated accounts with Azure credentials?
-
-As soon as your Azure accounts are discovered by Cloudyn, cost data is immediately provided in cost-based reports. However, for Cloudyn to provide usage and performance data, you need to register your Azure credentials for the accounts. For instructions, see [Add an account or update a subscription](activate-subs-accounts.md#add-an-account-or-update-a-subscription).
-
-To add Azure credentials for an account, in the Cloudyn portal, select the edit symbol to the right of the account name, not the subscription.
-
-Until your Azure credentials are added to Cloudyn, the account appears as _un-activated_.
-
-## How do I add multiple accounts and entities to an existing subscription?
-
-Additional entities are used to add additional Enterprise Agreements to a Cloudyn subscription. For more information, see [Create and manage entities](tutorial-user-access.md#create-and-manage-entities).
-
-For CSPs:
-
-To add additional CSP accounts to an entity, select **MSP Access** instead of **Enterprise** when you create the new entity. If your account is registered as an Enterprise Agreement and you want to add CSP credentials, Cloudyn support personnel might need to modify your account settings. If you're a paid Azure subscriber, you can create a new support request in the Azure portal. Select **Help + support**, and then select **New support request**.
-
-## Currency symbols in Cloudyn reports
-
-You might have multiple Azure accounts using different currencies. However, cost reports in Cloudyn do not show more than one currency type per report.
-
-If you have multiple subscriptions using different currencies, a parent entity and its child entity currencies are displayed in USD **$**. Our suggested best practice is to avoid using different currencies in the same entity hierarchy. In other words, all your subscriptions organized in an entity structure should use the same currency.
-
-Cloudyn automatically detects your Enterprise Agreement subscription currency and presents it properly in reports. However, Cloudyn only displays USD **$** for CSP and web-direct Azure accounts.
-
-## What are Cloudyn data refresh timelines?
-
-Cloudyn has the following data refresh timelines:
-- **Initial**: After you set up, it can take up to 24 hours to view cost data in Cloudyn. It can also take up to 10 days for Cloudyn to collect enough data to display sizing recommendations.
-- **Daily**: From the tenth day to the end of each month, Cloudyn should show your data up to date from the previous day after about UTC+3 the next day.
-- **Monthly**: From the first day to the tenth day of each month, Cloudyn might show your data only through the end of the previous month.
-Cloudyn processes data for the previous day when full data from the previous day is available. The previous day's data is usually available in Cloudyn by about UTC+3 each day. Some data, such as tags, can take an additional 24 hours to process.
-
-Data for the current month isn't available for collection at the beginning of every month. During that period, service providers finalize their billing for the previous month. The previous month's data appears in Cloudyn 5 to 10 days after the start of each month. During this time, you might see only amortized costs from the previous month. You might not see daily billing or usage data. When the data becomes available, Cloudyn processes it retroactively. After processing, all the monthly data is displayed between the fifth day and the tenth day of each month.
-
-If there is a delay sending data from Azure to Cloudyn, data is still recorded in Azure. The data is transferred to Cloudyn when the connection is restored.
-
-## Cost fluctuations in Cloudyn Cost Reports
-
-Cost reports can show cost fluctuations whenever cloud service providers send updated billing files. Fluctuating costs occur when new files are received from a cloud service provider outside of the usual daily or monthly reporting schedule. Cost changes don't result from Cloudyn recalculation.
-
-Throughout the month, all billing files sent by your cloud service provider are an estimate of your daily costs. Sometimes data is updated frequently, occasionally multiple times per day. Updates are more frequent with AWS than Azure. Cost totals should remain stable when the billing calculation for the previous month is complete and the final billing file is received, usually by the 10th of the month.
-
-Changes occur when you receive cost adjustments from your cloud service provider. Receiving credits is one example. Changes can occur months after the relevant month was closed. Changes are shown whenever a recalculation is made by your cloud service provider. Cloudyn updates its historical data to make sure that all adjustments are recalculated. It also verifies that the costs are shown accurately in its reports.
-
-## How can a direct CSP configure Cloudyn access for indirect CSP customers or partners?
-
-For instructions, see [Configure indirect CSP access in Cloudyn](quick-register-csp.md#configure-indirect-csp-access-in-cloudyn).
-
-## What causes the Optimizer menu item to appear?
-
-After you add Azure Resource Manager access and data is collected, you should see the **Optimizer** option. To activate Azure Resource Manager access, see [How do I activate unactivated accounts with Azure credentials?](#how-do-i-activate-unactivated-accounts-with-azure-credentials)
-
-## Is Cloudyn agent based?
-
-No. Agents are not used. Azure virtual machine metric data for VMs is gathered from the Microsoft Insights API. If you want to gather metric data from Azure VMs, they need to have diagnostics settings enabled.
-
-## Do Cloudyn reports show more than one AD tenant per report?
-
-Yes. You can [create a corresponding cloud account entity](tutorial-user-access.md#create-and-manage-entities) for each AD tenant that you have. Then you can view all of your Azure AD tenant data and other cloud platform providers including Amazon Web Services and Google Cloud Platform.
cost-management-billing Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/dashboards.md
- Title: View key metrics with Cloudyn dashboards in Azure
-description: This article describes how you can view key metrics with dashboards in Cloudyn.
- Previously updated : 03/12/2020
-# View key cost metrics with dashboards
-
-Dashboards in Cloudyn provide a high-level view of reports. Dashboards allow you to view key cost metrics in a single view. They also provide business trend highlights to help you make important business decisions.
--
-Dashboards are also used to create views for people with different responsibilities in your organization, which might include:
-- Financial controller
-- Application or project owner
-- DevOps engineer
-- Executives
-Dashboards are made up of widgets and each widget is essentially a report thumbnail. Click a widget to open its report. When you customize reports, you save them to My Reports and they're added to the dashboard.
-
-Dashboard versions differ for Management (MSP), Enterprise, and Premium Cloudyn users. The differences are determined by entity access levels. For more information about access levels, see [Entity access levels](tutorial-user-access.md#entity-access-levels).
-
-Dashboard availability depends on the type of cloud service provider account that is used when viewing dashboards. The type of information available and collected by Cloudyn affects reports in dashboards. For example, if you don't have an AWS account then you won't see the S3 Tracker dashboard. Similarly, if you don't enable Azure Resource Manager access to Cloudyn then you won't see any Azure-specific information in Optimizer dashboard widgets.
-
-You can use any of the premade dashboards or you can create your own dashboard with customized reports. If you're unfamiliar with Cloudyn reports, see [Use Cloudyn reports](use-reports.md).
-
-## Create a custom dashboard
-
-To quickly get started with a custom dashboard, you can duplicate an existing one to use its properties. Then you can modify it to suit your needs. On the dashboard you want to copy, click **Save As**. You can only duplicate customized dashboards; you can't duplicate the dashboards that are included with Cloudyn.
-
-To create a custom dashboard:
-
-1. On the homepage, click **Add New +**. The My Dashboard page is displayed.
- ![My dashboard page where you add new reports](./media/dashboards/my-dashboard.png)
-2. Click **Add New Report**. The Add Report box is displayed.
-3. Select the report that you want to add to the dashboard widget. The widget is added to the dashboard.
-4. Repeat the preceding steps until the dashboard is complete.
-5. To change the name of the dashboard, click the name of the dashboard on the Dashboard home page and type the new name.
-
-## Modify a custom dashboard
-
-As with duplicating a dashboard, you can't modify the dashboards included with Cloudyn. To modify a custom dashboard report:
-
-1. In the dashboard, find the report you want to modify and click **Edit**. The report is displayed.
-2. Make any changes that you want to the report and click **Save**. The report is updated and displays your changes.
-
-## Share a custom dashboard
-
-You can share a custom dashboard with others by setting it to _Public_ or _My Entity_. When you share to Public, all users can view the dashboard. Only users with access to the current entity can view the dashboard when you share to My Entity. The steps to share a custom dashboard with Public and My Entity are similar.
-
-To share a custom dashboard to Public:
-
-1. In a dashboard, click **Dashboard Settings**. The Dashboard Settings box is displayed.
- ![dashboard settings for a custom dashboard](./media/dashboards/dashboard-options.png)
-2. In the Dashboard Settings box, click the arrow symbol and then click **Public**. The Public Dashboard confirmation dialog box is displayed.
-3. Click **Yes**. The dashboard is now available to others.
-
-## Delete a custom dashboard report
-
-You can delete a custom report component from the dashboard. Deleting the report from the dashboard doesn't delete the report from the reports list. Instead, deleting the report removes it from the dashboard only.
-
-To delete a dashboard component, on the dashboard component, click **Delete**. Clicking **Delete** immediately deletes the dashboard component.
-
-## Share a dashboard (Enterprise)
-
-You can share custom dashboards with all users in your organization or with the users of the current entity. Sharing a dashboard can give others a quick high-level view of your KPIs. When you share a dashboard, it automatically replicates the dashboard to all your Cloudyn entities/customers. Changes to the shared dashboard are automatically updated.
-
-To share a dashboard with all users including subentities:
-
-1. On the dashboard home page, click **Edit**.
-2. Click **Share** and then select **Public**.
-3. The Global Public Dashboard confirmation box is displayed.
-4. Click **Yes** to set the dashboard as a global public dashboard.
-
-To share a dashboard with all users of a current entity:
-
-1. From the Dashboard home page, click **Edit**.
-2. Click **Share** and then select **My Entity**.
-3. Click **Yes** to set the dashboard as a public dashboard.
-
-## Duplicate a custom dashboard
-
-When you create a new dashboard, you might want to use similar properties from an existing dashboard. You can duplicate the dashboard to create a new one.
-
-You can only duplicate custom dashboards. You can't duplicate standard dashboards.
-
-To duplicate (clone) a custom dashboard:
-
-1. On the Dashboard that you want to duplicate, click **Save As**. A new dashboard opens with the same name and a number.
-2. Rename the duplicated dashboard and modify it as you like.
--Or-
-1. In Dashboard Settings, click **Save As** on the line of the dashboard that you want to duplicate.
-2. The duplicated dashboard opens.
-3. Rename the dashboard and modify it as you like.
-
-## Set a default dashboard
-
-You can set any dashboard as your default. Setting it as your default makes it appear as the left-most tab in the dashboard tab list. The default dashboard displays when you open the Cloudyn portal.
-- Click the dashboard tab you would like to set as default, then click **Default** on the right.
-
--Or-
-1. Click **Dashboard Settings** to see the list of available dashboards and select the dashboard that you want to set as the default.
- ![dashboard options for a default dashboard](./media/dashboards/dashboard-options.png)
-2. Click **Default** in the line of the dashboard. The Default Dashboard confirmation box is displayed.
-3. Click **Yes**. The dashboard is set to default.
-
-## Management dashboard
-The Management dashboard (or MSP dashboard for MSP users) includes highlights of the main report types.
-![Management dashboard showing various reports](./media/dashboards/management-dash.png)
-
-### Cost Entity Summary (Enterprise only)
-This widget summarizes the managed cost entities, including the number of entities and number of accounts.
-- Click the widget to open the Enterprise Details report.
-### Cost Over Time
-This widget can help you spot cost trends. It highlights the cost for the last day, based on the trend of the last 30 days.
-- Click the widget to open the Actual Cost Over Time report to view and filter additional details.
-### Asset Controller
-This widget highlights the number of running instances from the previous day, above the usage trend over the last 30 days.
-- Click the widget to open the Asset Controller dashboard.
-### Unused RI Detector
-This widget highlights the number of Amazon EC2 unused reservations.
-- Click the widget to open the Currently Unused Reservations report where you can view the unused reservations you can modify.
-### Cost by Service
-This widget highlights amortized costs by service for the last 30 days. Hover over the pie chart to see the costs per service.
-- Click the widget to open the Actual Cost Analysis report.
-### Potential savings
-This widget shows instance type pricing recommendations for Amazon EC2 and Amazon RDS.
-- Click the widget to open the Savings Analysis report. It lists your costs by instance types with potential savings.
-### Compute Instances - Daily Trend
-This widget displays the active instances by type, for the last 30 days.
-- Click the widget to open the Instances Over Time report, where you can view a breakdown of all instances running during the last 30 days.
-### Storage by department
-This widget displays the storage services used by departments. Hover over the pie chart to see your storage consumption by department.
-- Click the widget to open the S3 Tracker dashboard.
-## Cost Controller dashboard
-The Cost Controller dashboard shows pre-set cost allocation highlights.
-![Cost Controller dashboard showing various reports](./media/dashboards/cost-controller-dashboard.png)
-
-### Cost Over Time
-This widget helps you spot cost trends. It highlights the cost for the last day, based on the trend of the last 30 days.
-- Click the widget to open the Actual Cost Over Time report to view and filter additional details.
-### Monthly Cost Trends
-This widget highlights projected amortized spending and your actual spend since the beginning of the month.
-- Click the widget to open the Current Month Projected Cost report, which provides a month-to-date cost summary.
-
-This report shows the cost from the beginning of the month, the cost of the previous month, and the current month projected cost. The current month projected cost is calculated by adding the up-to-date monthly cost and projection. The projection is based on the cost monitored over the last 30 days.
-
-### 12 Month Planner
-This widget highlights the projected costs over the next 12 months and the potential savings.
-- Click the widget to open the Annual Projected Cost report.
-### Cost by Service
-This widget highlights amortized costs by service for the last 30 days.
-- Hover over the pie chart to see the costs per service.
-- Click the widget to open the Actual Cost Analysis report.
-### Cost by Account
-This widget highlights amortized costs by account for the last 30 days.
-- Hover over the pie chart to see the costs per account.
-- Click the widget to open the Actual Cost Analysis report.
-### Cost Trend by Day
-This widget highlights spend over the last 30 days.
-- Hover over the bar graph to see costs per day.
-- Click the widget to open the Actual Cost Over Time report.
-### Cost Trend by Month - Last 6 months
-
-This widget highlights spend over the last six months.
-- Hover over the bar graph to see costs per month.
-- Click the widget to open the Actual Cost Over Time report.
-## Asset Controller dashboard
-
-This dashboard displays the number of running instances, available and in-use disks, distribution of instance types, and storage information.
-![Asset Controller dashboard showing various reports](./media/dashboards/asset-controller-dashboard.png)
-
-### Compute Instances
-This widget displays the number of running instances based on the usage trend over the last 30 days.
-- Click the widget to open the Instances Over Time report.
-### Disks
-This widget highlights the total number and volume of disks that are in use and available.
-- Click the widget to open the Active Disks report.
-### Instance Type Distribution
-This widget highlights the instance types in a pie chart.
-- Click the widget to open the Instance Distribution report, which provides a breakdown of your active instances by the selected aggregation.
-### Compute Instances - Daily Trend
-This widget highlights the compute instances (spot, reserved, and on-demand) per day for the last 30 days.
-- Hover over the graph to view the number of compute instances, per type per day.
-- Click the widget to open the Instances Over Time report.
-### All Buckets (S3)
-This widget highlights the total S3 storage and number of objects stored.
-- Click the widget to open the S3 Tracker Dashboard. The dashboard helps you find, analyze, and display your current storage usage and trends.
-### SQL DB Instances (RDS)
-This widget highlights the number of running Amazon RDS instances based on the trend of the last 30 days.
-- Click the widget to open the RDS Instance Over Time report.
-## Optimizer Dashboard
-This dashboard displays downsizing recommendations, unused resources, and potential savings.
-![Optimizer dashboard showing various reports](./media/dashboards/optimizer-dashboard.png)
-
-### RI Calculator
-This widget displays the number of RI buying recommendations and highlights the potential annual savings.
-- Click the widget to open the Reserved Instance Calculator where you can determine when to use on-demand vs. reserved pricing plans.
-### Sizing
-This widget highlights the sizing recommended and potential savings, if implemented.
-- Click the widget to open the EC2 Cost Effective Sizing Recommendations report.
-### Unused RI Detector
-This widget highlights the number of Amazon EC2 unused reservations.
-- Click the widget to open the Currently Unused Reservations report where you can view the unused reservations that you can modify.
-### Available Disks
-This widget highlights the number of unattached disks in your deployment.
-- Click the widget to open the Unattached Disks report.
-### RDS RI Calculator
-This widget highlights the number of reservation recommendations for your Amazon RDS instances and the potential savings.
-- Click the widget to open the RDS RI Buying Recommendations report where you can see Cloudyn recommendations to use reserved instances instead of on-demand instances.
-### RDS Sizing
-This widget shows the number of sizing recommendations and the potential savings.
-- Click the widget to open the RDS Sizing Recommendations report, which displays detailed Amazon RDS sizing recommendations.
-The optimization recommendations are based on the usage and performance data monitored in the last month.
-
-## S3 Tracker dashboard
-The S3 Tracker dashboard helps you find, analyze, and display your current storage usage and trends.
-![S3 Tracker dashboard showing various reports](./media/dashboards/s3-tracker-dashboard.png)
-
-### All Buckets
-This widget highlights the total size of all your buckets, in GB, and the total number of objects in your buckets.
-- Click the widget to open the Distribution of S3 Size report. The report helps you analyze your S3 size by bucket, top-level folder, storage class, and versioning state.
-### Bucket Properties
-This widget highlights the total number of storage buckets.
-- Click the widget to view the S3 Bucket Properties report.
-### Scan Status
-This widget highlights when the last S3 scan was done and when the next one will start.
-- Click the widget to open the S3 Scan Status report.
-### Storage by Bucket
-This widget highlights the percentage that each bucket storage class is using.
-- Click the widget to open the Distribution of S3 Size report. The report helps you analyze your S3 size by bucket, top-level folder, storage class, and versioning state.
-### Number of Objects by Bucket
-This widget highlights the number of objects per bucket in actual number and percentage. Hover over the bucket to see the total objects.
-- Click the widget to open the Distribution of S3 Size report (Scan based).
-## Cloud Comparison Dashboard
-The Cloud Comparison dashboard helps you compare costs from different cloud providers based on pricing, CPU type, and RAM size.
-![Cloud Comparison dashboard showing various reports](./media/dashboards/cloud-comparison-dashboard.png)
-
-### EC2 Cost in Azure by Instance Type
-This widget highlights the last 30 days of usage at on-demand rates. It compares your current Amazon EC2 cost with the potential cost in Azure.
-- Hover over the bars to compare costs per instance type.
-- Click the widget to open the Porting Your Deployment – Cost Analysis report.
-### EC2 Cost in Azure
-This widget shows your current Amazon EC2 costs and compares them to Azure. The comparison is based on the last 30 days of usage in on-demand rates.
-- Click the widget to open the Porting Your Deployment - Cost Analysis report.
-### EC2/Azure Instance Type Mapping
-This widget highlights the best mapping of elastic compute units between Amazon EC2 and Azure.
-- Click the widget to open the Instances Type Mapping report.
-## Next steps
-- Read the [Use Cloudyn reports](use-reports.md) article to learn more about reports.
cost-management-billing Manage Budgets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/manage-budgets.md
- Title: Manage Cloudyn budgets in Azure
-description: This article helps you quickly create budgets and start managing them in Cloudyn.
- Previously updated : 03/12/2020
-# Manage Azure budgets with Cloudyn
-
-Setting up budgets and budget-based alerts help to improve your cloud governance and accountability. This article helps you quickly create budgets and start managing them in Cloudyn.
-
-When you have an Enterprise or MSP account, you can use your hierarchical cost entity structure to assign monthly budget quotas to different business units, departments, or any other cost entity. When you have a Premium account, you can use the budget management functionality, which is then applied to your entire cloud expenditure. All budgets are manually assigned.
-
-Based on assigned budgets, you can set threshold alerts based on the percentage of your budget that's consumed and define the severity of each threshold.
-
-Budget reports show the assigned budget. Users can view whether their spending is over, under, or on track with their budget over time. When you select **Show/Hide Fields** at the top of a budget report, you can view cost, budget, accumulated cost, or total budget.
-
-Azure Cost Management offers similar functionality to Cloudyn. Azure Cost Management is a native Azure cost management solution. It helps you analyze costs, create and manage budgets, export data, and review and act on optimization recommendations to save money. For more information about budgets in Cost Management, see [Create and manage budgets](../costs/tutorial-acm-create-budgets.md).
--
-## Create budgets
-
-When you create a budget, you set it for your fiscal year and it applies to a specific entity.
-
-To create a budget and assign it to an entity:
-
-1. Navigate to **Costs** &gt; **Cost Management** &gt; **Budget**.
-2. On the Budget Management page, under **Entities**, select the entity where you want to create the budget.
-3. In the budget year, select the year where you want to create the budget.
-4. For each month, set a budget value. When you're done, click **Save**.
-In this example, the monthly budget for June 2018 is set to $135,000. The total budget for the year is $1,615,000.00.
-![Create a budget page where you set a budget for each month](./media/manage-budgets/set-budget.png)
--
-To import a file for the annual budget:
-
-1. Under **Actions**, select **Export** to download an empty CSV template to use as your basis for the budget.
-2. Fill in the CSV file with your budget entries and save it locally.
-3. Under **Actions**, select **Import**.
-4. Select your saved file and then click **OK**.
-
-To export your completed budget as a CSV file, under **Actions**, select **Export** to download the file.
-
-## View budget in reports
-
-When completed, your budget is shown in most Cost reports under **Costs** &gt; **Cost Analysis** and in the Cost vs. Budget Over Time report. You can also schedule reports based on budget thresholds using **Actions**.
-
-Here's an example of the Cost Analysis report. It shows the total budget and cost by workload and usage types since the beginning of the year.
-
-![Example Cost Analysis report with budget](./media/manage-budgets/cost-analysis-budget-example.png)
-
-In this example, assume the current date is June 22. The cost for June 2018 is $71,611.28 compared to the monthly budget of $135,000. The cost is much lower than the monthly budget because there are still eight days of spending before the end of the month.
-
-Another way to view the report is to look at accumulated cost vs your budget. To see accumulated costs, under **Show/Hide Fields**, select **Accumulated Cost** and **Total Budget**. Here's an example showing the accumulated cost since the beginning of the year.
-
-![Example accumulated cost and total budget shown in the Cost vs. Budget Over Time report](./media/manage-budgets/accumulated-budget.png)
-
-Sometime in the future your accumulated cost might exceed your budget. You can more easily see that if you change the chart view to the _line_ type.
-
-![Budget shown in a line chart in the Cost by Months report](./media/manage-budgets/budget-line.png)
-
-## Create budget alerts for a filter
-
-In the previous example, you can see that the accumulated cost approached the budget. You can create automatic budget alerts so that you're notified when spending approaches or exceeds your budget. Basically, the alert is a scheduled report with a threshold. Budget alert threshold metrics include:
-- Remaining cost vs. budget – to specify a currency value threshold
-- Cost percentage vs. budget – to specify a percentage value threshold
-Let's look at an example.
-
-In the Cost vs. Budget Over Time report, click **Actions** and then select **Schedule report**. On the Threshold tab, select a threshold metric. For example, **Cost percentage vs budget**. Select an alert type and enter a percentage value of the budget. If you want to get notified only once, select **Number of consecutive alerts** and then type _1_. Click **Save**.
-
-![Creating a budget alert on the Save or Schedule this report box](./media/manage-budgets/budget-alert.png)
-
-## Next steps
-- If you haven't already completed the first tutorial for Cloudyn, read it at [Review usage and costs](tutorial-review-usage.md).
-- Learn more about the [reports available in Cloudyn](use-reports.md).
cost-management-billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/overview.md
- Title: Overview of Cloudyn in Azure
-description: Cloudyn is a multi-cloud cost management solution that helps you use Azure and other cloud resources.
- Previously updated : 04/15/2021
-# What is the Cloudyn service?
--
-## Next steps
--- [Review usage and costs](tutorial-review-usage.md)
cost-management-billing Quick Register Csp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/quick-register-csp.md
-- Title: Register using CSP Partner information with Cloudyn in Azure
-description: Learn details about the registration process used by partners to onboard their customers to the Cloudyn portal.
- Previously updated : 10/23/2020
-# Register with the CSP Partner program and view cost data
-
-As a CSP partner and a registered Cloudyn user, you can view and analyze your cloud spend in Cloudyn. [Azure Cost Management is natively available for direct partners](../costs/get-started-partners.md) who have onboarded their customers to a Microsoft Customer Agreement and have purchased an Azure Plan.
-
-Your registration provides access to the Cloudyn portal. This quickstart details the registration process needed to create a Cloudyn trial subscription and sign in to the Cloudyn portal.
--
-## Configure indirect CSP access in Cloudyn
-
-By default, the Partner Center API is only accessible to direct CSPs. However, a direct CSP provider can configure access for their indirect CSP customers or partners using entity groups in Cloudyn.
-
-To enable access for indirect CSP customers or partners, complete the following steps to segment indirect CSP data by using Cloudyn entity groups. Then, assign the appropriate user permissions to the entity groups.
-
-1. Create an entity group with the information at [Create entities](tutorial-user-access.md#create-and-manage-entities).
-2. Follow the steps at [Assigning subscriptions to Cost Entities](https://www.youtube.com/watch?v=d9uTWSdoQYo). Associate the indirect CSP customer's account and their Azure subscriptions with the entity that you created previously.
-3. Follow the steps at [Create a user with admin access](tutorial-user-access.md#create-a-user-with-admin-access) to create a user account with Admin access. Then, ensure the user account has admin access to the specific entities that you created previously for the indirect account.
-
-Indirect CSP partners sign in to the Cloudyn portal using the accounts that you created for them.
---
-## Next steps
-
-In this quickstart, you used your CSP information to register with Cloudyn. You also signed into the Cloudyn portal and started viewing cost data. To learn more about Cloudyn, continue to the tutorial for Cloudyn.
-
-> [!div class="nextstepaction"]
-> [Review usage and costs](tutorial-review-usage.md)
cost-management-billing Ref Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/ref-videos.md
- Title: Training videos for Cloudyn in Azure
-description: The training videos for Cloudyn walk you through getting started and using its features.
- Previously updated : 03/12/2020
-# Cloudyn walk-through training videos
-
-The following videos provide demonstrations to walk you through getting started with Cloudyn and using its features. Cloudyn supports multi-cloud cost tracking and optimization including Microsoft Azure, Amazon Web Services, and Google Cloud Platform.
--
-## Overview video
-
-[Introduction to Cloudyn](https://youtu.be/NWIRny6Wpsk)
-
->[!VIDEO https://www.youtube.com/embed/NWIRny6Wpsk]
-
-## Walk-through videos
-
-[Analyzing your cloud billing data vs. time with Cloudyn](https://youtu.be/7LsVPHglM0g)
-
->[!VIDEO https://www.youtube.com/embed/7LsVPHglM0g]
-
-[Adding Users to Cloudyn](https://youtu.be/Nzn7GLahx30)
-
->[!VIDEO https://www.youtube.com/embed/Nzn7GLahx30?ecver=1]
-
-[Creating a Cost Entity Hierarchy in Cloudyn](https://youtu.be/dAd9G7u0FmU)
-
->[!VIDEO https://www.youtube.com/embed/dAd9G7u0FmU?ecver=1]
-
-[Optimizing VM Size in Cloudyn](https://youtu.be/1xaZBNmV704)
-
->[!VIDEO https://www.youtube.com/embed/1xaZBNmV704?ecver=1]
-
-[Defining a Cost Allocation Model in Cloudyn](https://youtu.be/FJzof_agKHY)
-
->[!VIDEO https://www.youtube.com/embed/FJzof_agKHY?ecver=1]
-
-[Defining Custom Charges in Cloudyn](https://youtu.be/3HcgkGPQjXE)
-
->[!VIDEO https://www.youtube.com/embed/3HcgkGPQjXE?ecver=1]
-
-[How to Find Your EA Enrollment ID and API Key for use in Cloudyn](https://youtu.be/u_phLs_udig)
-
->[!VIDEO https://www.youtube.com/embed/u_phLs_udig?ecver=1]
-
-[Finding your Directory GUID and Rate ID for use in Cloudyn](https://youtu.be/PaRjnyaNGMI)
-
->[!VIDEO https://www.youtube.com/embed/PaRjnyaNGMI?ecver=1]
-
-[Assigning Accounts and Subscriptions to Cost Entities in Cloudyn](https://youtu.be/d9uTWSdoQYo)
-
->[!VIDEO https://www.youtube.com/embed/d9uTWSdoQYo?ecver=1]
-
-[Connecting to Azure Resource Manager with Cloudyn](https://youtu.be/oCIwvfBB6kk)
-
->[!VIDEO https://www.youtube.com/embed/oCIwvfBB6kk?ecver=1]
-
-[Analyzing your cloud billing data with Cloudyn](https://youtu.be/G0pvI3iLH-Y)
-
->[!VIDEO https://www.youtube.com/embed/G0pvI3iLH-Y?ecver=1]
cost-management-billing Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/storage-accounts.md
- Title: Configure storage accounts for Cloudyn in Azure
-description: This article describes how you configure Azure storage accounts and AWS storage buckets for Cloudyn.
-- Previously updated : 03/12/2020
-# Configure storage accounts for Cloudyn
-
-<!-- intent: As a Cloudyn user, I want to configure Cloudyn to use my cloud service provider storage account to store my reports. -->
-
-You can save Cloudyn reports in the Cloudyn portal, Azure storage, or AWS storage buckets. Saving your reports to the Cloudyn portal is free of charge. However, saving your reports to your cloud service provider's storage is optional and incurs additional cost. This article helps you configure Azure storage accounts and Amazon Web Services (AWS) storage buckets to store your reports.
--
-## Prerequisites
-
-You must have either an Azure storage account or an Amazon storage bucket.
-
-If you don't have an Azure storage account, you need to create one. For more information about creating an Azure storage account, see [Create a storage account](../../storage/common/storage-account-create.md).
-
-If you don't have an AWS Simple Storage Service (S3) bucket, you need to create one. For more information about creating an S3 bucket, see [Create a Bucket](https://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html).
-
-## Configure your Azure storage account
-
-Configuring your Azure storage account for use by Cloudyn is straightforward. Gather details about the storage account and copy them into the Cloudyn portal.
-
-1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. Click **All Services**, select **Storage accounts**, scroll to the storage account that you want to use, and then select the account.
-3. On your storage account page under **Settings**, click **Access Keys**.
-4. Copy your **Storage account name** and **Connection string** under key1.
- ![Copy storage account name and connection string](./media/storage-accounts/azure-storage-access-keys.png)
-5. Open the Cloudyn portal from the Azure portal or navigate to [https://azure.cloudyn.com](https://azure.cloudyn.com) and sign in.
-6. Click the cog symbol and then select **Reports Storage Management**.
-7. Click **Add new +** and ensure that Microsoft Azure is selected. Paste your Azure storage account name in the **Name** area. Paste your **connection string** in the corresponding area. Enter a container name and then click **Save**.
- ![Paste Azure storage account name and connection string in the Add a new report storage box](./media/storage-accounts/azure-cloudyn-storage.png)
-
- Your new Azure report storage entry appears in the storage account list.
- ![New Azure report storage entry in list](./media/storage-accounts/azure-storage-entry.png)
--
-You can now save reports to Azure storage. In any report, click **Actions** and then select **Schedule report**. Name the report and then either add your own URL or use the automatically created URL. Select **Save to storage** and then select the storage account. Enter a prefix that is added to the report file name. Select either CSV or JSON file format and then save the report.
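-
-If you want to sanity check the connection string before pasting it into Cloudyn, any SDK that accepts a storage connection string works. Here's a minimal sketch assuming the `azure-storage-blob` Python package; the placeholder connection string is illustrative:
-
-```python
-# Minimal sketch: verify an Azure storage connection string is valid before
-# handing it to Cloudyn. Assumes: pip install azure-storage-blob.
-from azure.storage.blob import BlobServiceClient
-
-# Illustrative placeholder - paste the key1 connection string you copied.
-conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
-
-service = BlobServiceClient.from_connection_string(conn_str)
-# Listing containers forces an authenticated round trip to the account.
-for container in service.list_containers():
-    print(container.name)
-```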
-
-## Configure an AWS storage bucket
-
-Cloudyn uses existing AWS credentials (user or role) to save reports to your bucket. To test access, Cloudyn tries to save a small text file named _check-bucket-permission.txt_ to the bucket.
-
-You grant the Cloudyn role or user the PutObject permission on your bucket. Then, use an existing bucket or create a new one to save reports. Finally, decide how to manage the storage class, set lifecycle rules, or remove any unnecessary files.
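-
-Cloudyn's access test boils down to a single `PutObject` call. Once permissions are in place (see the following sections), you can replicate the same check yourself. Here's a sketch assuming the `boto3` package and that your local AWS credentials belong to the Cloudyn role or user:
-
-```python
-# Sketch: replicate Cloudyn's permission test by writing the same small
-# text file it uses. Assumes: pip install boto3, credentials configured.
-import boto3
-
-bucket = "<bucketname>"  # replace with your bucket name
-
-s3 = boto3.client("s3")
-s3.put_object(
-    Bucket=bucket,
-    Key="check-bucket-permission.txt",
-    Body=b"permission check",
-)
-print("PutObject succeeded - the policy grants the required access.")
-```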
-
-### Assign permissions to your AWS user or role
-
-When you create a new policy, you provide the exact permissions needed to save a report to an S3 bucket.
-
-1. Sign in to the AWS console and select **Services**.
-2. Select **IAM** from the list of services.
-3. Select **Policies** on the left side of the console and then click **Create Policy**.
-4. Click the **JSON** tab.
-5. The following policy allows you to save a report to an S3 bucket. Copy and paste the following policy example to the **JSON** tab. Replace &lt;bucketname&gt; with your bucket name.
-
- ```json
- {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "CloudynSaveReport2S3",
- "Effect": "Allow",
- "Action": [
- "s3:PutObject"
- ],
- "Resource": [
- "arn:aws:s3:::<bucketname>/*"
- ]
- }
- ]
- }
- ```
-
-6. Click **Review policy**.
- ![AWS JSON policy showing example information](./media/storage-accounts/aws-policy.png)
-7. On the Review policy page, type a name for your policy. For example, _CloudynSaveReport2S3_.
-8. Click **Create policy**.
-
-### Attach the policy to a Cloudyn role or user in your account
-
-To attach the new policy, you open the AWS console and edit the Cloudyn role or user.
-
-1. Sign in to the AWS console and select **Services**, then select **IAM** from the list of services.
-2. Select either **Roles** or **Users** from the left side of the console.
-
-**For roles:**
-
- 1. Click your Cloudyn role name.
- 2. On the **Permissions** tab, click **Attach Policy**.
- 3. Search for the policy that you created and select it, then click **Attach Policy**.
- ![Example policy attached to your Cloudyn role](./media/storage-accounts/aws-attach-policy-role.png)
-
-**For users:**
-
-1. Select the Cloudyn User.
-2. On the **Permissions** tab, click **Add permissions**.
-3. In the **Grant Permission** section, select **Attach existing policies directly**.
-4. Search for the policy that you created and select it, then click **Next: Review**.
-5. On the Add permissions to role name page, click **Add permissions**.
- ![Example policy attached to your Cloudyn user](./media/storage-accounts/aws-attach-policy-user.png)
--
-### Optional: Set permission with bucket policy
-
-You can also set permissions to create reports on your S3 bucket using a bucket policy. In the classic S3 view:
-
-1. Create or select an existing bucket.
-2. Select the **Permissions** tab and then click **Bucket policy**.
-3. Copy and paste the following policy sample. Replace &lt;bucket\_name&gt; with the ARN of your bucket and &lt;Cloudyn\_principle&gt; with the ARN of the role or user that Cloudyn uses.
-
- ```json
- {
- "Id": "Policy1485775646248",
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "SaveReport2S3",
- "Action": [
- "s3:PutObject"
- ],
- "Effect": "Allow",
- "Resource": "<bucket_name>/*",
- "Principal": {
- "AWS": [
- "<Cloudyn_principle>"
- ]
- }
- }
- ]
- }
- ```
-
-4. In the Bucket policy editor, click **Save**.
-
-### Add AWS report storage to Cloudyn
-
-1. Open the Cloudyn portal from the Azure portal or navigate to [https://azure.cloudyn.com](https://azure.cloudyn.com) and sign in.
-2. Click the cog symbol and then select **Reports Storage Management**.
-3. Click **Add new +** and ensure that AWS is selected.
-4. Select an account and storage bucket. The name of the AWS storage bucket is automatically filled in.
- ![Example information in the Add a new report storage box](./media/storage-accounts/aws-cloudyn-storage.png)
-5. Click **Save** and then click **Ok**.
-
- Your new AWS report storage entry appears in the storage account list.
- ![New AWS report storage entry show in storage account list](./media/storage-accounts/aws-storage-entry.png)
--
-You can now save reports to AWS storage. In any report, click **Actions** and then select **Schedule report**. Name the report and then either add your own URL or use the automatically created URL. Select **Save to storage** and then select the storage account. Enter a prefix that is added to the report file name. Select either CSV or JSON file format and then save the report.
-
-## Next steps
--- Review [Understanding Cloudyn reports](understanding-cost-reports.md) to learn about the basic structure and functions of Cloudyn reports.
cost-management-billing Tutorial Forecast Spending https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/tutorial-forecast-spending.md
- Title: Tutorial - Forecast spending with Cloudyn in Azure
-description: In this tutorial you learn how to forecast spending using historical usage and spending data.
-- Previously updated : 03/12/2020
-# Tutorial: Forecast future spending
-
-Cloudyn helps you forecast future spending using historical usage and spending data. You use Cloudyn reports to view all cost projection data. The examples in this tutorial walk you through reviewing cost projections using the reports. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Forecast future spending
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
--
-## Prerequisites
-- You must have an Azure account.
-- You must have either a trial registration or paid subscription for Cloudyn.
-
-## Forecast future spending
-
-Cloudyn includes cost projection reports to help you forecast spending based on your usage over time. Their primary purpose is to help you ensure that your cost trends do not exceed your organization's expectations. The reports you use are Current Month Projected Cost and Annual Projected Cost. Both show projected future spending if your usage remains relatively consistent with your last 30 days of usage.
-
-The Current Month Projected Cost report shows the costs of your services. It uses costs from the beginning of the month and the previous month to show the projected cost. On the reports menu at the top of the portal, click **Costs** > **Projection and Budget** > **Current Month Projected Cost**. The following image shows an example.
-
-![Example information shown in the Current month projected cost report](./media/tutorial-forecast-spending/project-month01.png)
-
-In the example, you can see which services cost the most. Azure costs were lower than AWS costs. To see cost projection details for Azure VMs, in the **Filter** list, select **Azure/VM**.
-
-![Example showing the Azure VM current month projected cost](./media/tutorial-forecast-spending/project-month02.png)
-
-Follow the same basic preceding steps to look at monthly cost projections for other services you're interested in.
-
-The Annual Projected Cost report shows the extrapolated cost of your services over the next 12 months.
-
-On the reports menu at the top of the portal, click **Costs** > **Projection and Budget** > **Annual Projected Cost**. The following image shows an example.
-
-![Example showing the Annual projected cost report](./media/tutorial-forecast-spending/project-annual01.png)
-
-In the example, you can see which services cost the most. Like the monthly example, Azure costs were lower than AWS costs. To see cost projection details for Azure VMs, in the **Filter** list, select **Azure/VM**.
-
-![Example showing the Annual projected cost of VMs](./media/tutorial-forecast-spending/project-annual02.png)
-
-In the image above, the annual projected cost of Azure VMs is $28,374.
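-
-Cloudyn's exact forecasting model isn't documented here, but because these reports assume usage stays relatively consistent with your last 30 days, you can approximate the idea with a simple run-rate extrapolation. Here's a sketch with a hypothetical trailing-30-day spend chosen to roughly match the example above:
-
-```python
-# Naive run-rate sketch of a cost projection; not Cloudyn's actual model.
-last_30_days_cost = 2332.10  # hypothetical spend over the trailing 30 days
-
-daily_run_rate = last_30_days_cost / 30
-annual_projection = daily_run_rate * 365
-
-print(f"Projected annual cost: ${annual_projection:,.2f}")  # ~$28,373.88
-```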
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Forecast future spending
--
-Advance to the next tutorial to learn how to manage costs with cost allocation and showback reports.
-
-> [!div class="nextstepaction"]
-> [Manage costs with cost allocation and showback reports](./tutorial-manage-costs.md)
cost-management-billing Tutorial Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/tutorial-manage-costs.md
- Title: Tutorial - Manage costs with Cloudyn in Azure
-description: In this tutorial you learn to manage costs by using cost allocation and showback and chargeback reports.
-- Previously updated : 03/12/2020
-# Tutorial: Manage costs by using Cloudyn
-
-You manage costs and produce showback reports in Cloudyn by allocating costs based on tags. The process of cost allocation assigns costs to your consumed cloud resources. Costs are fully allocated when all your resources are categorized with tags. After costs are allocated, you can provide showback or chargeback to your users with dashboards and reports. However, many resources might be untagged or untaggable when you start to use Cloudyn.
-
-For example, you might want to get reimbursed for engineering costs. You need to be able to show your engineering team that you need a specific amount, based on resource costs. You can show them a report for all the consumed resources that are tagged *engineering*.
-
-In this article, tags and categories are sometimes synonymous. Categories are broad collections and can be many things. They might include business units, cost centers, web services, or anything that is tagged. Tags are name/value pairs that enable you to categorize resources and to view and manage consolidated billing information by applying the same tag to multiple resources and resource groups. In earlier versions of the Azure portal, a *tag name* was referred to as a *key*. Tags are created for and stored by a single Azure subscription. Tags in AWS consist of key/value pairs. Because both Azure and AWS have used the term *key*, Cloudyn uses that term. Category Manager uses keys (tag names) to merge tags.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Use custom tags to allocate costs.
-> * Create showback and chargeback reports.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
--
-## Prerequisites
-- You must have an Azure account.
-- You must have either a trial registration or paid subscription for Cloudyn.
-- [Unactivated accounts must be activated](activate-subs-accounts.md) in the Cloudyn portal.
-- [Guest-level monitoring](azure-vm-extended-metrics.md) must be enabled on your virtual machines.
-
-## Use custom tags to allocate costs
-
-Cloudyn gets resource group tag data from Azure and automatically propagates tag information to resources. In cost allocation, you can see cost by resource tags.
-
-Using the Cost Allocation model, you define categories (tags) that get applied internally to uncategorized (untagged) resources to group your costs, and you define rules to handle the untagged costs. Cost allocation rules are your saved instructions for distributing one service's costs to another service. Afterward, those resources show tags/categories in *cost allocation* reports when you select the model that you created.
-
-Keep in mind that tag information doesn't appear for those resources in *cost analysis* reports. Also, tags applied in Cloudyn using cost allocation aren't sent to Azure, so you won't see them in the Azure portal.
-
-When you start cost allocation, the first thing you do is define the scope by using a cost model. The cost model does not change costs, it distributes them. When you create a cost model, you segment your data by cost entity, account, or subscription, and by multiple tags. Common example tags might include a billing code, cost center, or group name. Tags also help you perform showback or chargeback to other parts of your organization.
-
-To create a custom cost allocation model, select **Costs** &gt; **Cost Management** &gt; **Cost Allocation 360°** on the reports menu.
-
-![Example showing a dashboard where you select Cost Allocation 360](./media/tutorial-manage-costs/cost-allocation-360.png)
-
-On the **Cost Allocation 360** page, select **Add** and then enter a name and description for your cost model. Select either all accounts or individual accounts. If you want to use individual accounts, you can select multiple accounts from multiple cloud service providers. Next, click **Categorization** to choose the discovered tags that categorize your cost data. Choose tags (categories) that you want to include in your model. In the following example, the **Unit** tag is selected.
-
-![Example showing cost model categorization](./media/tutorial-manage-costs/cost-model01.png)
-
-The example shows that $19,680 is uncategorized (without tags).
-
-Next, select **Uncategorized Resources** and select services that have unallocated costs. Then, define rules to allocate costs.
-
-For example, you might want to take your Azure storage costs and distribute the costs equally to Azure virtual machines (VMs). To do so, select the **Azure/Storage** service, select **Proportional to Categorized**, and then select **Azure/VM**. Then, select **Create**.
-
-![Example cost model allocation rule for equal distribution](./media/tutorial-manage-costs/cost-model02.png)
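-
-Conceptually, a **Proportional to Categorized** rule spreads an uncategorized cost across categorized costs in proportion to their size. The following sketch shows the arithmetic with made-up tag values and amounts; Cloudyn performs this distribution internally when it processes the model:
-
-```python
-# Conceptual sketch of a "Proportional to Categorized" allocation rule.
-# Numbers and tag names are illustrative, not Cloudyn's internals.
-vm_costs_by_unit = {"engineering": 6000.0, "finance": 2000.0}  # categorized Azure/VM costs
-untagged_storage_cost = 1000.0  # uncategorized Azure/Storage cost to distribute
-
-total_vm_cost = sum(vm_costs_by_unit.values())
-allocated = {
-    unit: cost + untagged_storage_cost * (cost / total_vm_cost)
-    for unit, cost in vm_costs_by_unit.items()
-}
-print(allocated)  # {'engineering': 6750.0, 'finance': 2250.0}
-```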
---
-In a different example, you might want to allocate all your Azure network costs to a specific business unit in your organization. To do so, select the **Azure/Network** service and then under **Define Allocation Rule**, select **Explicit Distribution**. Then, set the distribution percentage to 100 and select the business unit, **G&amp;A** in the following image:
-
-![Example cost model allocation rule for a specific business unit](./media/tutorial-manage-costs/cost-model03.png)
---
-For all remaining uncategorized resources, create additional allocation rules.
-
-If you have any unallocated Amazon Web Services (AWS) reserved instances, you can assign them to tagged categories with **Reserved Instances**.
-
-To view information about the choices that you made to allocate costs, select **Summary**. To save your information and to continue working on additional rules later, select **Save As Draft**. Or, to save your information and have Cloudyn start processing your cost allocation model, select **Save and Activate**.
-
-The list of cost models shows your new cost model with **Processing status**. It can take some time before the Cloudyn database is updated with your cost model. When processing is done, the status is updated to **Completed**. You can then view data from your cost model in the Cost Analysis report under **Extended Filters** &gt; **Cost Model**.
-
-### Category Manager
-
-Category Manager is a data-cleansing tool that helps you merge the values of multiple categories (tags) to create new ones. It's a simple rule-based tool where you select a category and create rules to merge existing values. For example, you might have existing categories for **R&amp;D** and **dev** where both represent the development group.
-
-In the Cloudyn portal, click the gear symbol in the upper right and select **Category Manager**. To create a new category, select the plus symbol (**+**). Enter a name for the category and then under **Keys**, enter the category keys that you want to include in the new category.
-
-When you define a rule, you can add multiple values with an OR condition. You can also do some basic string operations. For either case, click the ellipsis symbol (**…**) to the right of **Rule**.
-
-To define a new rule, in the **Rules** area, create a new rule. For example, enter **dev** under **Rules** and then enter **R&amp;D** under **Actions**. When you're done, save your new category.
-
-The following image shows an example of rules created for a new category named **Work-Load**:
-
-![Example showing the new work-load category](./media/tutorial-manage-costs/category01.png)
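-
-Under the hood, this kind of merge is a simple value mapping. The following sketch illustrates the idea; the rule table and tag values are illustrative assumptions, not Cloudyn's implementation:
-
-```python
-# Sketch of a rule-based merge like Category Manager's: map existing tag
-# values onto a new category. Rules and values here are illustrative.
-rules = {"dev": "R&D", "r&d": "R&D"}  # value -> merged value for the new category
-
-def merge_category(value: str) -> str:
-    """Return the merged value if a rule matches, else keep the original."""
-    return rules.get(value.strip().lower(), value)
-
-for tag in ["dev", "R&D", "marketing"]:
-    print(tag, "->", merge_category(tag))
-# dev -> R&D, R&D -> R&D, marketing -> marketing
-```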
-
-### Tag sources and reports
-
-Tag data that you see in Cloudyn reports originates in three places:
-- Cloud provider resource APIs
-- Cloud provider billing APIs
-- Manually created tags from the following sources:
 - Cloudyn entity tags - user-defined metadata applied to Cloudyn entities
 - Category Manager - a data-cleansing tool that creates new tags based on rules applied to existing tags
-
-To view cloud provider tags in Cloudyn cost reports, you must create a custom cost allocation model using Cost Allocation 360. To do so, go to **Costs** > **Cost Management** > **Cost Allocation 360**, select the desired tags, and then define rules to handle untagged costs. Then, create a new cost model. Afterward, you can view reports in Cost Allocation Analysis to view, filter, and sort on your Azure resource tags.
-
-Azure resource tags only appear in **Costs** > **Cost Allocation Analysis** reports.
-
-Cloud provider billing tags appear in all cost reports.
-
-Cloudyn entity tags and tags that you manually create appear in all cost reports.
--
-## Create showback and chargeback reports
-
-The method that organizations use to perform showback and chargeback varies greatly. However, you can use any of the dashboards and reports in the Cloudyn portal as the basis for either purpose. You can provide user access to anyone in your organization so that they can view dashboards and reports on demand. All Cost Analysis reports support showback because they show users the resources that they consumed. And, they allow users to drill into cost or usage data that's specific to their group within your organization.
-
-To view the results of cost allocation, open the Cost Analysis report and select the cost model that you created. Then, add a grouping by one or more of the tags selected in the cost model.
-
-![Cost Analysis report showing an example of data from the new cost](./media/tutorial-manage-costs/cost-analysis.png)
-
-You can easily create and save reports that focus on specific services consumed by specific groups. For example, you might have a department that uses Azure VMs extensively. You can create a report that's filtered on Azure VMs to show consumption and costs.
-
-If you need to provide snapshot data to other teams, you can export any report in PDF or CSV format.
--
-## Next steps
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Use custom tags to allocate costs.
-> * Create showback and chargeback reports.
---
-Advance to the next tutorial to learn about controlling access to data.
-
-> [!div class="nextstepaction"]
-> [Control access to data](tutorial-user-access.md)
cost-management-billing Tutorial Optimize Reserved Instances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/tutorial-optimize-reserved-instances.md
- Title: Tutorial - Optimize reserve instance cost with Cloudyn - Azure
-description: In this tutorial, you learn how to optimize your reserved instance costs for Azure and Amazon Web Services (AWS).
-- Previously updated : 03/12/2020
-<!-- Intent: As a cloud-consuming administrator, I need to ensure that my reserved instances are optimized for cost and usage -->
-
-# Tutorial: Optimize reserved instances
-
-In this tutorial, you learn how Cloudyn can help you optimize your reserved instance costs and utilization for Azure and Amazon Web Services (AWS). A reserved instance with either cloud service provider is a long-term commitment where you pay up-front for future use of a VM. It can potentially offer considerable savings versus the standard pay-per-use VM pricing model. Potential savings are only realized when you fully use the capacity of your reserved instances.
-
-This tutorial explains how Azure and AWS Reserved Instances (RIs) are supported by Cloudyn. It also describes how you can optimize reserved instance costs, primarily by ensuring that your reservations are fully utilized. In this tutorial, you will:
-
-> [!div class="checklist"]
-> * Understand Azure RI costs
-> * Learn about the benefits of RIs
-> * Optimize Azure RI costs
-> * View RI costs
-> * Assess Azure RI cost effectiveness
-> * Optimize AWS RI costs
-> * Buy recommended RIs
-> * Modify unused reservations
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
--
-## Prerequisites
-- You must have an Azure account.
-- You must have either a trial registration or paid subscription for Cloudyn.
-- You must have purchased RIs in Azure or AWS.
-
-## Understand Azure RI costs
-
-When you buy Azure Reserved VM Instances, you pay up-front for future use. The up-front payment covers the cost of your future use of the VMs:
-- of a specific type
-- in a specific region
-- for a term of either one or three years
-- up to a purchased VM quantity.
-
-You can view your purchased Azure Reserved VM Instances in the Azure portal at [Reservations](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade).
-
-The term _Azure Reserved VM Instance_ applies only to a pricing model. It doesn't change your running VMs at all. The term is specific to Azure; more generally, it's referred to as a _reserved instance_ or _reservation_. Reserved instances that you've purchased don't apply to specific VMs - they apply to any matching VM. For example, a reservation applies to a VM type that runs in the region you chose for your purchased reservation.
-
-Purchased reserved instances apply only to the basic hardware. They don't cover software licenses of a VM. For example, you might reserve an instance and you have a matching VM running Windows. The reserved instance only covers the base cost of the VM. In this example, you pay the full price of any required Windows licenses. To get a discount on the operating system or other software running on your VMs, you should consider using [Azure Hybrid Benefits](https://azure.microsoft.com/pricing/hybrid-benefit). Hybrid Benefits offer you a similar type of discount for your software licenses as the reserved instances do for the base VMs.
-
-Reserved instance utilization does not directly affect cost. In other words, running a VM at 100% CPU utilization or at 0% CPU utilization has the same effect: you are pre-paying for the VM allocation, not its actual utilization.
-
-The following image shows how standard on-demand VM usage relates to costs compared with reserved instances:
-
-![On-demand costs versus reserved instance costs](./media/tutorial-optimize-reserved-instances/azure01.png)
---
-The red bars show the accumulated cost of the reserved instance purchase. You pay only the one-time fee. VM usage is free. The blue bars show the accumulated cost of the same VM running with the pay-as-you-go or on-demand pricing model. In this example, there's a *break-even point* somewhere between the seventh and eighth month of VM usage; starting in the eighth month, you start saving money.
-
-## Benefits of RIs
-
-Every reserved instance purchase applies to a VM of a specific size and location. For example, D2s\_v3 running in the West US location as shown in the following image:
-
-![Azure reserved instance details](./media/tutorial-optimize-reserved-instances/azure02.png)
-
-The reserved instance purchase becomes beneficial when a VM runs enough hours to reach the reservation's break-even point. The VM must match the size and location of your reserved instance. For example, the break-even point is at about seven and a half months in the preceding chart. So, the purchase is beneficial when the VM matching the reservation runs at least 7.5 months \* 30 days \* 24 hours = 5,400 hours. If the matching VM runs fewer than 5,400 hours, the reservation is more expensive than pay-as-you-go.
-
-The break-even point might differ for each VM size and for each location. It also depends on your negotiated VM pay-as-you-go price. Before you make a purchase, you should check the break-even point applicable to your case.
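-
-The break-even math itself is simple division. Here's a sketch with illustrative prices; substitute your own negotiated rates:
-
-```python
-# Back-of-the-envelope sketch of the break-even math described above.
-# Both prices are illustrative assumptions, not published rates.
-one_time_fee = 747.00          # up-front cost of a 1-year reserved instance
-pay_as_you_go_hourly = 0.1383  # on-demand hourly rate for the matching VM
-
-break_even_hours = one_time_fee / pay_as_you_go_hourly
-break_even_months = break_even_hours / (30 * 24)  # ~5,400 hours, ~7.5 months
-
-print(f"Break-even after {break_even_hours:,.0f} hours (~{break_even_months:.1f} months)")
-```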
-
-Another point to consider when you purchase the reservation is the reserved instance scope. The scope determines whether the benefit of the reservation is shared or if it applies to a specific subscription. Shared reserved instances are randomly applied across all your subscriptions to first-found matching VMs.
-
-The shared purchase scope is the most flexible and it is recommended whenever possible. Your chances of utilizing all your reserved instances are significantly higher with the shared scope. However, when the owner of a subscription pays for the reserved instance, they may have no choice but to purchase it with the Single Subscription scope.
-
-## Optimize Azure RI costs
-
-Cloudyn supports reserved instances and Hybrid Benefits by:
-- Showing you the costs associated with pricing models
-- Tracking RI usage
-- Assessing RI impact
-- Allocating RI costs according to your policies
-
-The first action you should take before you purchase a reserved instance is to assess the impact of the RI purchase:
-- How much will it cost you?
-- How much will you save?
-- What is the break-even point?
-
-The Reserved Instance Purchase Impact report can help answer those questions.
-
-## Assess Azure RI cost effectiveness
-
-In the Cloudyn portal, navigate to **Optimizer** > **RI Comparison** and then select **Reserved Instance Purchase Impact**.
-
-In the Reserved Instance Purchase Impact report, select a VM size (Instance Type), Location (Region), reservation term, quantity, and the expected runtime. Then you can assess whether your purchase will save you money.
-
-For example, if you purchase a reservation for a VM of type DS1\_v2 in East US and it runs 24x7 through an entire year, then you could save $369.48 annually. The break-even point is at five months. See the following image:
-
-![Azure reserved instance break-even point](./media/tutorial-optimize-reserved-instances/azure03.png)
-
-However, if it runs only 50% of the time, the break-even point will be at 10 months and the savings will be only $49.74 annually. You might not benefit from purchasing the reservation for that instance type in this example. See the following image:
-
-![Example of the break-even point for Azure VMs](./media/tutorial-optimize-reserved-instances/azure04.png)
-
-## View RI costs
-
-When you purchase a reservation, you make a one-time payment. There are two ways to view the payment in Cloudyn:
-- Actual Cost
-- Amortized Cost
-
-### Actual reserved instance cost
-
-The Actual Cost Analysis and Analysis Over Time reports show the full amount that you paid for the reservation, starting in the month of purchase. They help you see your actual spending over a period.
-
-Navigate to **Costs** > **Cost Analysis** in the Cloudyn portal and then select either **Actual Cost Analysis** or **Actual Cost Over Time**. Next, set the filters. For example, filter to just the Azure/VM service and group by Resource Type and Price Model. See the following image:
-
-![Example of the actual cost of reserved instances](./media/tutorial-optimize-reserved-instances/azure05.png)
-
-You can filter by a service, **Azure/VM** in this example, and group by **Price Model** and **Resource Type** as shown in the following image:
-
-![Example of actual cost report groups and filters grouped by price model and resource type](./media/tutorial-optimize-reserved-instances/azure06.png)
-
-You can also analyze the type of payments you've made such as one-time fees, usage fees, and license fees.
-
-### Amortized reserved instance cost
-
-When you purchase an RI, you pay an up-front fee that is visible in the month of purchase but not in your subsequent invoices. So, looking only at your monthly usage can be misleading. Your true monthly cost is the monthly usage plus the proportional (amortized) part of any previously paid one-time fees. The Amortized Cost report can help you get the true picture.
-
-The amortized reserved instance cost is calculated by taking the reservation's one-time fee and amortizing it over the reservation term. In Actual Cost reports, one-time fees are visible in the month of the reservation purchase, but they don't appear in the deployment's daily and monthly spending. Amortized Cost reports show the actual cost of the deployment over time. The amortized cost report is the only way to see your true cost trends. It's also the only way to project your future spending.
-
-In the Actual Cost report, you saw a $747 spike for an RI purchase on November 16. In the Amortized Cost report (see the following image), there's a partial-day cost on November 16. Starting on November 17, you see the amortized RI cost of $747/365 = $2.05. Incidentally, you also notice that the purchased reservation is unused, so you can optimize it by switching it to a different VM size.
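-
-The amortization arithmetic itself is simple division; the following sketch uses the $747 one-year example:
-
-```python
-# Sketch of the amortization shown above: spread a one-time RI fee evenly
-# across the reservation term. Figures match the $747 one-year example.
-one_time_fee = 747.00
-term_days = 365  # one-year reservation
-
-amortized_daily_cost = one_time_fee / term_days
-print(f"Amortized daily cost: ${amortized_daily_cost:.2f}")  # $2.05
-```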
-
-To view the Amortized Cost reports, navigate to **Costs** > **Cost Analysis** and then select **Amortized Cost Analysis** or **Amortized Cost Over Time**.
-
-![Example report showing amortized reserved instance cost](./media/tutorial-optimize-reserved-instances/azure07.png)
-
-## Optimize AWS RI costs
-
-Reserved instances are a long-term commitment. They're useful when you have sustained VM usage because reserved instances are less expensive than on-demand instances. However, they need to be sufficiently used. The commitment is to use resources, typically VMs, for a defined period of one or three years. When you make the commitment to buy, you prepay for the resources with a reservation. However, you might not always fully use what you've committed to in the reservation.
-
-For example, you might assess your environment and determine that you had 20 standard D2 instances running constantly over the last year. You could purchase a reservation for them and potentially save significant money. In a different example, you might have committed to using ten MA4 instances for the year but might have used only five to date. Both examples illustrate inefficient RI use. There are two ways to optimize reserved instance costs with Cloudyn optimization reports:
-- Review buying recommendations for what you could buy based on your historical usage
-- Modify unused reservations
-
-You use the _EC2 RI Buying Recommendations_ and _EC2 Currently Unused Reservations_ reports to improve your reserved instance usage and costs.
-
-## Buy recommended RIs
-
-Cloudyn compares your on-demand instance usage to potential reserved instances. Where it finds possible savings, its recommendations are shown in the EC2 RI Buying Recommendations report.
-
-On the reports menu at the top of the portal, click **Optimizer** > **Pricing Optimization** > **EC2 RI Buying Recommendations**.
-
-The following image shows buying recommendations from the report.
-
-![Example showing buying recommendations in the EC2 Buying Recommendations report](./media/tutorial-optimize-reserved-instances/aws01.png)
-
-In this example, the Cloudyn\_A account has 32 reserved instance buying recommendations. If you follow all the buying recommendations, you could potentially save $137,770 annually. Keep in mind that the purchase recommendations provided by Cloudyn assume that usage for your running workloads will remain consistent.
-
-To view details explaining why each purchase is recommended, click the plus symbol (**+**) under **Justifications**. Here's an example for the first recommendation in the list.
-
-![Example showing purchase justification details](./media/tutorial-optimize-reserved-instances/aws02.png)
-
-The preceding example shows that running the workload on-demand would cost $90,456 annually. However, if you purchase the reservation in advance, the same workload would cost $56,592 and save you $33,864 annually.
-
-Click the plus symbol next to **EC2 RI Purchase Impact** to view your break-even point over a year and see approximately when your purchase investment is realized. In the following example, the on-demand accumulated cost starts to exceed the RI accumulated cost about eight months after the purchase:
-
-![Example showing purchase impact details](./media/tutorial-optimize-reserved-instances/aws03.png)
-
-You start saving money at that point.
-
-You can review **Instances over Time** to verify the accuracy of the suggested buying recommendation. In this example, you can see that six instances were used on average for the workload over the last 30-day period.
-
-![Example showing historical usage of instances over time](./media/tutorial-optimize-reserved-instances/aws04.png)
-
-## Modify unused reservations
-
-Unused reservations are common in many cloud computing environments. Ensuring that unused reservations are fully used can save you money when you modify the reservations to meet your current needs. For example, you might have a subscription containing standard D3 instances running on Linux. If you won't fully utilize the reservation, you can change the instance type. Or, you might move the unused resources to a different reservation or a different account.
-
-AWS sells reserved instances for specific availability zones and regions. If you've purchased reserved instances for a specific availability zone, then you cannot move the reservations between zones. However, you can easily move regional reserved instances between zones using the **EC2 Currently Unused Reservations** report. Alternatively, you can modify them to have a regional scope, and then they'll apply to matching instances across all availability zones.
-
-On the reports menu at the top of the portal, click **Optimizer** > **Inefficiencies** > **EC2 Currently Unused Reservations**.
-
-The following images show the report with unused reserved instances.
-
-![Example showing summarized information about unused reservations](./media/tutorial-optimize-reserved-instances/unused-ri01.png)
-
-Click the plus symbol under **Details** to view reservation details for a specific reservation.
-
-![Example showing unused reservations details](./media/tutorial-optimize-reserved-instances/unused-ri02.png)
-
-In the preceding example, there are 77 unused reservations total in various availability zones. The first reservation has 51 unused instances. Looking lower in the list, there are potential reservation instance modifications that you can make using the **m3.2xlarge** instance type in the **us-east-1c** availability zone.
-
-Click **Modify** for the first reservation in the list to open the **Modify RI** page that shows data about the reservation.
-
-![Example showing reservations that you can modify](./media/tutorial-optimize-reserved-instances/unused-ri03.png)
-
-Reserved instances that you can modify are listed. In the following example image, there are 51 unused reservations that you can modify, but 54 are needed between the two reservations. If you modify your unused reservations to use them all, three instances will continue to run on demand. For this example, split your unused reservations so that the first reservation uses 30 and the second reservation uses 21.
-
-Click the plus symbol for the first reservation entry and set the **Reservation quantity** to **30**. For the second entry, set the reservation quantity to **21** and then click **Apply**.
-
-![Example showing changes to the reservation quantity](./media/tutorial-optimize-reserved-instances/unused-ri04.png)
-
-All your unused instances for the reservation are fully utilized and 51 instances are no longer running on demand. In this example, you save your organization money by significantly reducing on-demand use and using reservations that are already paid for.
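-
-The split is just a greedy allocation of the unused quantity across the reservations that need it. Here's a sketch with hypothetical per-target needs that reproduce the 30/21 split:
-
-```python
-# Sketch of the split described above: allocate 51 unused reserved instances
-# greedily across targets that need more; the remainder stays on-demand.
-unused = 51
-demand = [30, 24]  # hypothetical per-target instance needs (54 total)
-
-allocation = []
-for need in demand:
-    grant = min(need, unused)
-    allocation.append(grant)
-    unused -= grant
-
-print("Allocation per target:", allocation)                       # [30, 21]
-print("Still running on-demand:", sum(demand) - sum(allocation))  # 3
-```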
-
-## Next steps
-
-In this tutorial, you successfully accomplished the following tasks:
-
-> [!div class="checklist"]
-> * Understood Azure RI costs
-> * Learned about the benefits of RIs
-> * Optimized Azure RI costs
-> * Viewed RI costs
-> * Assessed Azure RI cost effectiveness
-> * Optimized AWS RI costs
-> * Bought recommended RIs
-> * Modified unused reservations
--
-Advance to the next tutorial to learn about controlling access to data.
-
-> [!div class="nextstepaction"]
-> [Control access to data](tutorial-user-access.md)
cost-management-billing Tutorial Review Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/tutorial-review-usage.md
- Title: Tutorial - Review usage and costs with Cloudyn in Azure
-description: In this tutorial, you review usage and costs to track trends, detect inefficiencies, and create alerts.
-- Previously updated : 03/12/2020
-<!-- Intent: As a cloud-consuming user, I need to view usage and costs for my cloud resources and services. -->
-
-# Tutorial: Review usage and costs
-
-Cloudyn shows you usage and costs so that you can track trends, detect inefficiencies, and create alerts. All usage and cost data is displayed in Cloudyn dashboards and reports. The examples in this tutorial walk you through reviewing usage and costs using dashboards and reports.
-
-Azure Cost Management offers similar functionality to Cloudyn. Azure Cost Management is a native Azure cost management solution. It helps you analyze costs, create and manage budgets, export data, and review and act on optimization recommendations to save money. For more information, see [Azure Cost Management](../cost-management-billing-overview.md).
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Track usage and cost trends
-> * Detect usage inefficiencies
-> * Create alerts for unusual spending or overspending
-> * Export data
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
--
-## Prerequisites
-- You must have an Azure account.
-- You must have either a trial registration or paid subscription for Cloudyn.
-
-## Open the Cloudyn portal
-
-You review all usage and costs in the Cloudyn portal. Open the Cloudyn portal from the Azure portal or navigate to https://azure.cloudyn.com and sign in.
-
-## Track usage and cost trends
-
-You track actual money spent for usage and costs with Over Time reports to identify trends. To start looking at trends, use the Actual Cost Over Time report. On the top left of the portal, click **Costs** > **Cost Analysis** > **Actual Cost Over Time**. When you first open the report, no groups or filters are applied to it.
-
-Here is an example report:
-
-![Example Actual Cost Over Time report](./media/tutorial-review-usage/actual-cost01.png)
-
-The report shows all spending over the last 30 days. To view only spending for Azure services, apply the Service group and then filter for all Azure services. The following image shows the filtered services.
-
-![Example showing filtered Azure services](./media/tutorial-review-usage/actual-cost02.png)
-
-In the preceding example, less money was spent starting on 2018-10-29. But, too many columns can obscure an obvious trend. You can change the report view to a line or area chart to see the data displayed in other views. The following image shows the trend more clearly.
-
-![Example showing a decreasing Azure VM cost trend](./media/tutorial-review-usage/actual-cost03.png)
-
-Continuing with the example, you can see that the cost for Azure VMs dropped. Costs for other Azure services also started dropping on that day. So, what caused that reduction in spending? In this example, a large work project was completed, so consumption of many Azure services also dropped.
-
-To watch a tutorial video about tracking usage and cost trends, see [Analyzing your cloud billing data vs. time with Cloudyn](https://youtu.be/7LsVPHglM0g).
-
-## Detect usage inefficiencies
-
-Optimizer reports improve efficiency, optimize usage, and identify ways to save money spent on your cloud resources. They are especially helpful with cost-effective sizing recommendations intended to help reduce idle or expensive VMs.
-
-A common problem that affects organizations when they initially move resources into the cloud is their virtualization strategy. They often use an approach similar to the one they used for creating virtual machines in their on-premises virtualization environment. And, they assume that costs are reduced by moving their on-premises VMs to the cloud as-is. However, that approach isn't likely to reduce costs.
-
-The problem is that their existing infrastructure was already paid for. Users could create and keep large VMs running if they liked, idle or not, with little consequence. Moving large or idle VMs to the cloud is likely to *increase* costs. Cost allocation for resources is important when you enter into agreements with cloud service providers. You must pay for what you commit to, whether you fully use the resource or not.
-
-The Cost Effective Sizing Recommendations report identifies potential annual savings by comparing VM instance type capacity with historical CPU and memory usage data.
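-
-To illustrate the idea behind the report, the following sketch flags VMs whose historical peak usage leaves lots of headroom. The thresholds, sizes, and metrics are assumptions for illustration, not Cloudyn's actual algorithm:
-
-```python
-# Illustrative sketch of sizing analysis: compare a VM's historical peak
-# usage to its allocated capacity and flag resize candidates.
-vms = [
-    # (name, vCPUs allocated, peak CPU %, memory GB allocated, peak memory GB)
-    ("vm-web-01", 8, 12.0, 32, 6.5),
-    ("vm-sql-01", 8, 85.0, 32, 28.0),
-]
-
-for name, vcpus, peak_cpu, mem_gb, peak_mem_gb in vms:
-    cpu_headroom = peak_cpu < 25.0             # mostly idle CPU
-    mem_headroom = peak_mem_gb < mem_gb * 0.5  # using under half the memory
-    if cpu_headroom and mem_headroom:
-        print(f"{name}: candidate for a smaller instance type")
-    else:
-        print(f"{name}: sized appropriately")
-```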
-
-On the menu at the top of the portal, click **Optimizer** > **Sizing Optimization** > **Cost Effective Sizing Recommendations**. If useful, apply a filter to reduce results. Here's an example image.
-
-![Cost effective sizing recommendation report for Azure VMs](./media/tutorial-review-usage/sizing01.png)
-
-In this example, $2,382 could be saved by following the recommendations to change the VM instance types. Click the plus symbol (+) under **Details** for the first recommendation. Here are details about the first recommendation.
-
-![Example showing recommendation details](./media/tutorial-review-usage/sizing02.png)
-
-View VM instance IDs by clicking the plus symbol next to **List of Candidates**.
-
-![Example showing a list of VM candidates to resize](./media/tutorial-review-usage/sizing03.png)
-
-To watch a tutorial video about detecting usage inefficiencies, see [Optimizing VM Size in Cloudyn](https://youtu.be/1xaZBNmV704).
-
-Azure Cost Management also provides cost-saving recommendations for Azure services. For more information, see [Tutorial: Optimize costs from recommendations](../costs/tutorial-acm-opt-recommendations.md).
-
-## Create alerts for unusual spending
-
-Alerts allow you to automatically notify stakeholders of spending anomalies and overspending risks. You can create alerts using reports that support alerts based on budget and cost thresholds.
-
-This example uses the **Actual Cost Over Time** report to send a notification when your spending on an Azure VM nears your total budget. In this scenario, you have a total budget of $20,000 and you want to receive a notification when costs are approaching half of your budget, $9,000, and an additional alert when costs reach $10,000.
-
-1. From the menu at the top of the Cloudyn portal, select **Costs** > **Cost Analysis** > **Actual Cost Over Time**.
-2. Set **Groups** to **Service** and set **Filter on the service** to **Azure/VM**.
-3. In the top right of the report, select **Actions** and then select **Schedule report**.
-4. To send yourself an email of the report at a scheduled interval, select the **Scheduling** tab in the **Save or Schedule this report** dialog. Be sure to select **Send via email**. Any tags, grouping, and filtering you use are included in the emailed report.
-5. Select the **Threshold** tab and then select **Actual Cost vs. Threshold**.
- 1. In the **Red alert** threshold box, enter 10000.
- 2. In the **Yellow alert** threshold box, enter 9000.
- 3. In the **Number of consecutive alerts** box, enter the number of consecutive alerts to receive. After you receive the total number of alerts that you specified, no additional alerts are sent.
-6. Select **Save**.
-
-![Example showing red and yellow alerts based on spending thresholds](./media/tutorial-review-usage/schedule-alert01.png)
-
-You can also choose the **Cost Percentage vs. Budget** threshold metric to create alerts. This allows you to specify the thresholds as percentages of your budget instead of currency values.
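-
-To make the threshold behavior concrete, here's a small sketch of the alert logic as configured above; the values mirror the example scenario and the logic is illustrative, not Cloudyn's implementation:
-
-```python
-# Sketch of the threshold logic configured above: yellow at $9,000, red at
-# $10,000, for a $20,000 total budget.
-YELLOW_THRESHOLD = 9_000
-RED_THRESHOLD = 10_000
-
-def alert_level(actual_cost: float) -> str:
-    if actual_cost >= RED_THRESHOLD:
-        return "red alert"
-    if actual_cost >= YELLOW_THRESHOLD:
-        return "yellow alert"
-    return "no alert"
-
-for cost in (8_500, 9_200, 10_400):
-    print(f"${cost:,}: {alert_level(cost)}")
-```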
-
-## Export data
-
-Similar to the way you create alerts for reports, you can also export data from any report. For example, you might want to export a list of Cloudyn accounts or other user data. To export any report, open the report and then, in the top right, click **Actions**. Select **Export all report data** to download or print the information. Or, select **Schedule report** to have the report sent as an email.
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Track usage and cost trends
-> * Detect usage inefficiencies
-> * Create alerts for unusual spending or overspending
-> * Export data
--
-Advance to the next tutorial to learn how to forecast spending using historical data.
-
-> [!div class="nextstepaction"]
-> [Forecast future spending](./tutorial-forecast-spending.md)
cost-management-billing Tutorial User Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/tutorial-user-access.md
- Title: Tutorial - Assign access with Cloudyn in Azure
-description: In this tutorial you learn how to assign access to Cloudyn data with user accounts that define access levels to entities.
-- Previously updated : 03/12/2020
-# Tutorial: Assign access to Cloudyn data
-
-Access to Cloudyn data is provided by user or entity management. Cloudyn user accounts determine access to *entities* and administrative functions. There are two types of access: admin and user. Unless modified per user, admin access allows unrestricted use of all functions in the Cloudyn portal, including user management, recipient list management, and root entity access to all entity data. User access is intended for end users who view and create reports using the access they have to entity data.
-
-Entities are used to reflect your business organization's hierarchical structure. They identify departments, divisions, and teams in your organization in Cloudyn. The entity hierarchy helps you accurately track spending by the entities.
-
-When you registered your Azure agreement or account, an account with admin permission was created in Cloudyn, so you can perform all the steps in this tutorial. This tutorial covers access to Cloudyn data including user management and entity management. You learn how to:
-
-> [!div class="checklist"]
-> * Create a user with admin access
-> * Create a user with user access
-> * Delete a user
-> * Delete or export personal data
-> * Create and manage entities
--
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
--
-## Prerequisites
-- You must have an Azure account.
-- You must have either a trial registration or paid subscription for Cloudyn.
-
-## Create a user with admin access
-
-Although you already have admin access, coworkers in your organization might also need to have admin access. In the Cloudyn portal, click the gear symbol in the upper right and select **User Management**. Click **Add New User** to add a new user.
-
-Enter the required information about the user. The **login ID** must be a valid e-mail address. Choose permissions to allow User Management so that the user can create and modify other users. Recipient Lists Management allows the user to edit recipient lists. When you select **Notify user by email**, Cloudyn sends the user a link with sign-in information by e-mail. On first sign-in, the user sets a password.
-
-Under **User has admin access**, the root entity of your organization is selected. Leave root selected and then save the user information. Selecting the root entity allows the user to have admin permission not only to the root entity in the tree, but also to all the entities that reside below it.
- ![Example showing admin access in the Add new user box](./media/tutorial-user-access/new-admin-access.png)
-
-## Create a user with user access
-Typical users that need access to Cloudyn data like dashboards and reports should have user access to view them. Create a new user with user access similar to the one you created with admin access, with the following differences:
-- Clear **Allow User Management** and **Allow Recipient lists Management**, and clear all entries in the **User has admin access** list.
-- Select the entities that the user needs access to in the **User has user access** list.
-- You can also allow admin access to specific entities, as needed.
-
-![Example showing user access in the Add new user box](./media/tutorial-user-access/new-user-access.png)
-
-To watch a tutorial video about adding users, see [Adding Users to Cloudyn](https://youtu.be/Nzn7GLahx30).
-
-## Delete a user
-
-When you delete a user, any entities that the user has access to remain intact. Saved *personal* reports are removed when the user is deleted. Saved *public* reports created by the user are not deleted.
-
-You cannot remove yourself as a user.
-
-> [!WARNING]
-> When you delete a user, it can't be restored.
-
-1. In the Cloudyn portal, click the gear symbol in the upper right and then select **User Management**.
-2. In the list of users, select the user that you want to delete and then click **Delete User** (the trash can symbol).
-3. In the Delete User box, click **Yes** and then click **OK**.
--
-## Delete or export personal data
-
-If you want to delete or export personal data from Cloudyn, you need to create a support ticket. When the support ticket is created, it acts as a formal request (a Data Subject Request). Microsoft then takes prompt action to remove the account and delete any customer or personal data.
-
-## Create and manage entities
-
-When you define your cost entity hierarchy, a best practice is to identify the structure of your organization. Entities allow you to segment spending by individual accounts or subscriptions. You create cost entities to form logical groups that manage and track spending. As you build the tree, consider how you want or need to see costs segregated by business units, cost centers, environments, and sales departments. The entity tree in Cloudyn is flexible due to entity inheritance.
-
-Individual subscriptions for your cloud accounts are linked to specific entities. You can associate an entity with a cloud service provider account or subscription. So, entities are multi-tenant. You can assign specific users access to only their segment of your business using entities. Doing so keeps data isolated, even across large portions of a business like subsidiaries. And, data isolation helps with governance.
-
-When you registered your Azure agreement or account with Cloudyn, your Azure resource data including usage, performance, billing, and tag data from your subscriptions was copied to your Cloudyn account. However, you must manually create your entity tree. If you skipped the Azure Resource Manager registration, then only billing data and a few asset reports are available in the Cloudyn portal.
-
-In the Cloudyn portal, click the gear symbol in the upper right and select **Cloud Accounts**. You start with a single entity (root) and build your entity tree under the root. Here's an example of an entity hierarchy that might resemble many IT organizations after the tree is complete:
-
-![Example of an entity tree shown on the Accounts Management page](./media/tutorial-user-access/entity-tree.png)
-
-Next to **Entities**, click **Add Entity**. Enter information about the person or department that you want to add. The **Full Name** and **Email** fields don't have to match existing users. If you want to view a list of access levels, search in help for *Adding an entity*.
-
-![Example showing entity name and access levels in the Add entity box](./media/tutorial-user-access/add-entity.png)
-
-When you're done, **Save** the entity.
-
-### Entity access levels
-
-Entity access levels, in conjunction with a user's access, define which actions are available in the Cloudyn portal:
-- **Enterprise** - Provides the ability to create and manage child cost entities.
-- **Enterprise + Cost Allocation** - Provides the ability to create and manage child cost entities, including cost allocation for consolidated accounts.
-- **Enterprise, Cost based on parent cost allocation** - Provides the ability to create and manage child cost entities. Costs for the account are based on the parent's cost allocation model.
-- **Custom Dashboards Only** - Allows the user to see only predefined custom dashboards.
-- **Dashboards Only** - Allows the user to see only dashboards.
-
-### Create a cost entity hierarchy
-
-To create a cost entity hierarchy, you must have an account with enterprise or enterprise + cost allocation access.
-
-In the Cloudyn portal, click the gear symbol in the upper right and select **Cloud Accounts**. The **Entities** tree is shown in the left pane. If necessary, expand the entity tree so that you can view the entity that you want to associate with an account. Your cloud service provider accounts are shown on tabs in the right pane. Select a tab and then click and drag an account/subscription to the entity, then drop it. The **Move** box informs you that the account was successfully moved. Click **OK**.
-
-You can also associate multiple accounts to an entity. Select the accounts and then click **Move**. In the Move Accounts box, select the entity where you want to move the account to and then click **Save**. The Move accounts box asks you to verify that you want to move the accounts. Click **Yes**, and then click **OK**.
-
-To watch a tutorial video about creating a cost entity hierarchy, see [Creating a Cost Entity Hierarchy in Cloudyn](https://youtu.be/dAd9G7u0FmU).
-
-If you are an Azure Enterprise Agreement user, watch a tutorial video about associating accounts and subscriptions to entities at [Connecting to Azure Resource Manager with Cloudyn](https://youtu.be/oCIwvfBB6kk).
-
-## Next steps
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Create a user with admin access
-> * Create a user with user access
-> * Delete a user
-> * Delete or export personal data
-> * Create and manage entities
--
-If you haven't already enabled Azure Resource Manager API access for your accounts, proceed to the following article.
-
-> [!div class="nextstepaction"]
-> [Activate Azure subscriptions and accounts](./activate-subs-accounts.md)
cost-management-billing Understanding Cost Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/understanding-cost-reports.md
- Title: Understanding Cloudyn cost management reports in Azure
-description: This article helps you understand the basic structure and functions of Cloudyn cost management reports.
- Previously updated : 03/12/2020
-# Understanding Cloudyn cost management reports
-
-This article helps you understand the basic structure and functions of Cloudyn cost management reports. Most Cloudyn reports are intuitive and have a uniform look and feel. After you read this article, you'll be ready to use all the cost management reports. Many standard features are available throughout the various reports, allowing you to navigate the reports with ease. Reports are customizable, and you can select from several options to calculate and display results.
--
-## Report fields and options
-
-Here's a look at an example of the Cost Over Time report. Most Cloudyn reports have a similar layout.
-
-![Example of the Cost Over Time report with numbered areas corresponding to descriptions](./media/understanding-cost-reports/sample-report.png)
-
-Each numbered area in the preceding image is described in detail in the following information:
-
-1. **Date Range**
-
- Use the Date Range list to define a report time interval using a preset or custom range.
-2. **Saved Filter**
-
- Use the Saved Filter list to save the current groups and filters that are applied to the report. Saved filters are available across cost and performance reports, including:
-
- - Cost Analysis
- - Allocation
- - Asset Management
- - Optimization
-
- Type a filter name and then click **Save**.
-
-3. **Tags**
-
- Use the Tags area to group by tag categories. Tags listed in the menu are Azure department or cost center tags or they are Cloudyn's cost entity and subscription tags. Select tags to filter results. You can also type a tag name (keyword) to filter results.
-
- ![Example of a list of tags to filter results by](./media/understanding-cost-reports/select-options.png)
-
- Click **Add** to add a new filter.
-
- ![Add filter box showing options and conditions to filter by](./media/understanding-cost-reports/add-filter.png)
-
- Tag grouping or filtering does not relate to Azure resources or resource group tags.
-
- Cost allocation tag grouping and filtering are available in the **Groups** menu option.
-
-4. **Groups in reports**
-
- Use groups in Cost Analysis reports to show standard, itemized categories from billing data in your report. However, groups in Cost Allocation reports show tag-based categories. Tag-based categories are defined in the cost allocation model, along with the standard itemized categories from billing data.
-
- ![First example list of tags that you can group by](./media/understanding-cost-reports/groups-tags01.png)
-
- ![Second example list of tags that you can group by](./media/understanding-cost-reports/groups-tags02.png)
-
- In Cost Allocation Reports, groups in tag-based group categories might include:
- - Tags
- - resource group tags
- - Cloudyn cost entity tags
- - Subscription tag categories for cost allocation purposes
-
- Examples might include:
- - Cost center
- - Department
- - Application
- - Environment
- - Cost code
-
- Here's a list of built-in groups available in reports:
-
- - **Cost Type**
- - Select a cost type or multiple cost types, or select all. Cost types include:
- - One-Time Fee
- - Support
- - Usage Cost
- - **Customer**
- - Select a specific customer, multiple customers, or select all customers.
- - **Account Name**
- - The account or subscription name. In Azure, it is the name of the Azure subscription.
- - **Account No**
- - Select an account, multiple accounts, or all accounts. In Azure, it is the Azure subscription's GUID.
- - **Parent Account**
- - Select the parent account, multiple accounts, or select all.
- - **Service**
- - Select a service, multiple services, or select all services.
- - **Provider**
- - The cloud provider where assets and expenses are associated.
- - **Region**
- - Region where the resource is hosted.
- - **Availability Zone**
- - AWS isolated locations within a region.
- - **Resource Type**
- - The type of resource in use.
- - **Sub-Type**
- - Select the sub-type.
- - **Operation**
- - Select the operation or **Show all**.
- - **Price Model**
- - All Upfront
- - No Upfront
- - Partial Upfront
- - On Demand
- - Reservation
- - Spot
- - **Charge Type**
- - Select Negative or Positive charge type or both.
- - **Tenancy**
- - Whether a machine is running as a dedicated machine.
- - **Usage Type**
- - Usage type can be one-time fees or recurring fees.
-
-5. **Filters**
-
- Use single or multi-select filters to set ranges to selected values. To set a filter, click **Add** and then select filter categories and values.
-
-6. **Cost Model**
-
- Use Cost Model to select a cost model that you previously created with Cost Allocation 360. You might have multiple Cloudyn cost models, depending on your cost allocation requirements. Some of your organizational teams might have cost allocation requirements that differ from others. Each team can have their own dedicated cost model.
-
- For information about creating a cost allocation model definition, see [Use custom tags to allocate costs](tutorial-manage-costs.md#use-custom-tags-to-allocate-costs).
-
-7. **Amortization**
-
- Use Amortization in Cost Allocation reports to view non-usage based service fees or one-time payable costs and spread their cost over time evenly during their lifespan. Examples of one-time fees might include:
- - Annual support fees
- - Annual security components fees
- - Reserved instances purchase fees
- - Some Azure Marketplace items.
-
- Under Amortization, select **Amortized cost** or **Actual Cost**.
-
-8. **Resolution**
-
- Use Resolution to select the time resolution within the selected date range. Your time resolution determines how units are displayed in the report and can be:
- - Daily
- - Weekly
- - Monthly
- - Quarterly
- - Annual
-
-9. **Allocation rules**
-
- Use Allocation Rules to apply or disable cost allocation recalculation. You can enable or disable the cost allocation recalculation for billing data. The recalculation applies to the selected categories in the report. It allows you to assess the impact of the cost allocation recalculation against raw billing data.
-
-10. **Uncategorized**
-
- Use Uncategorized to include or exclude uncategorized costs in the report.
-
-11. **Show/hide fields**
-
- The Show/hide option does not have any effect in reports.
-
-12. **Display formats**
-
- Use Display formats to select various graph or table views.
-
- ![Symbols of display formats that you can select](./media/understanding-cost-reports/display-formats.png)
-
-13. **Multi-color**
-
- Use Multi-color to set the color of charts in your report.
-
-14. **Actions**
-
- Use Actions to save, export, or schedule the report.
-
-15. **Policy**
-
- Although not pictured, some reports include a projected cost calculation policy. In those reports, the **Consolidated** policy shows recommendations for all accounts and subscriptions under the current entity, such as a Microsoft enrollment or AWS payer. The **Standalone** policy shows recommendations for one account or subscription as if no other subscriptions exist. The policy that you select depends on the optimization strategy used by your organization. Cost projections are based on the last 30 days of usage.
-
-## Save and schedule reports
-
-After you create a report, you can save it for future use. Saved reports are available in **My Tools** > **My Reports**. If you make changes to an existing report and save it, the report is saved as a new version. Or, you can save it as a new report.
-
-### Save a report to the Cloudyn portal
-
-While viewing any report, click **Actions** and then select **Save to my reports**. Name the report and then either add your own URL or use the automatically created URL. You can optionally **Share** the report publicly with others in your organization, or you can share it to your entity. If you do not share the report, it remains a personal report that only you can view. Save the report.
--
-### Save a report to cloud provider storage
-
-To save a report to your cloud service provider, you must have already configured a storage account. While viewing any report, click **Actions** and then select **Schedule report**. Name the report and then either add your own URL or use the automatically created URL. Select **Save to storage** and then select the storage account or add a new one. Enter a prefix that gets appended to the report file name. Select a CSV or JSON file format and then save the report.
-
-### Schedule a report
-
-You can run reports at scheduled intervals and send them to a recipient list or a cloud service provider storage account. While viewing any report, click **Actions** and then select **Schedule report**. You can send the report by email and save it to a storage account. Under **Schedule**, select the interval (daily, weekly, or monthly). For weekly and monthly, select the day or dates to deliver and select the time. Save the scheduled report. If you select the Excel report format, the report is sent as an attachment. When you select the email content format, report results that are displayed in chart format are delivered as a graph.
-
-### Export a report as a CSV file
-
-While viewing any report, click **Actions** and then select **Export all report data**. A pop-up window appears and a CSV file is downloaded.
-
-## Next steps
-
- Learn about the reports that are included in Cloudyn at [Use Cloudyn reports](./use-reports.md).
- Learn about how to use reports to create [dashboards](./dashboards.md).
cost-management-billing Use Reports https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/cloudyn/use-reports.md
- Title: Use Cloudyn reports in Azure
-description: This article describes the purpose of the Cloudyn reports that are included in the Cloudyn portal to help you effectively use them.
- Previously updated : 03/12/2020
-# Reports available in the Cloudyn portal
-
-This article describes the purpose of the Cloudyn reports that are included in the Cloudyn portal. It also describes how you can effectively use the reports. Most reports are intuitive and have a uniform look and feel. Most of the actions that you can do in one report, you can also do in other reports. For an overview about how to use Cloudyn reports, including how to customize and save or to schedule reports, see [Understanding cost reports](understanding-cost-reports.md).
-
-Azure Cost Management offers similar functionality to Cloudyn. Azure Cost Management is a native Azure cost management solution. It helps you analyze costs, create and manage budgets, export data, and review and act on optimization recommendations to save money. For more information, see [Azure Cost Management](../cost-management-billing-overview.md).
--
-## Report types
-
-There are three types of Cloudyn reports:
-- Over-time reports. For example, the Cost Over Time report. Over-time reports show a time series of data over a selected interval with a predefined resolution; for example, a weekly resolution for the last two months. You can use grouping and filtering to zoom in to various data points.
- - Over-time reports can help you view trends and detect spikes or anomalies.
-- Analysis reports. For example, the Cost Analysis report. These reports show aggregated data over a period that you define and allow grouping and filtering on the data.
- Analysis reports can help you view spikes, determine anomaly root causes, and show you a granular breakdown of your data.
-- Tabular reports. You can view any report as a table, but some reports are viewed only as a table. These reports provide detailed lists of items.
- Recommendations are tabular reports; there are no visualizations for recommendations. However, you can visualize recommendation results. For example, savings over time.
- - Tabular reports are useful as lists of actions or for data export for further processing. For example, a chargeback report.
-
-Cost reports show either _actual_ or _amortized_ costs.
-
-Actual cost reports display the payments made during the selected time frame. For example, all one-time fees such as reserved instance (RI) purchases are shown in actual cost reports as spikes in cost.
-
-Amortized cost reports spread one-time fees over a period to which they apply. For example, one-time fees for RI purchases are spread over the reservation term and are not shown as a spike. The amortized view is the only way to see true trends and make cost projections.
-
-In some cases, the amortization is presented as a separate report. Examples include the Cost Analysis and Amortized Cost Analysis reports. In other cases, amortization is a report policy such as the Cost Allocation and Cost Analysis reports.
-
-You can schedule any report for periodic delivery. Cost reports allow setting a threshold, so they're useful for alerts.
-
-## Cost analysis vs. cost allocation
-
-_Cost analysis_ reports display billing data from your cloud providers. Using the reports, you can group and drill into various data segments itemized from the billing file. The reports enable granular cost navigation across your cloud vendor's raw billing data.
-
-Some _cost analysis_ reports don't group costs by resource tags. And, tag-based billing information only appears in reports after you allocate costs by creating a cost model using [Cost Allocation 360](tutorial-manage-costs.md#use-custom-tags-to-allocate-costs).
-
-_Cost allocation_ reports are available after you create a cost model using [Cost Allocation 360](tutorial-manage-costs.md#use-custom-tags-to-allocate-costs). Cloudyn processes cost and billing data and _matches_ the data to the usage and tag data of your cloud accounts. To match the data, Cloudyn requires access to your usage data. If you have accounts that are missing credentials, they are labeled as _uncategorized resources_.
-
-## Dashboards
-
-Dashboards in Cloudyn provide a high-level view of reports. Dashboards are made up of widgets, and each widget is essentially a report thumbnail. When you [customize reports](understanding-cost-reports.md#save-and-schedule-reports), you save them to My Reports and they're added to the dashboard. For more information about dashboards, see [View key cost metrics with dashboards](dashboards.md).
-
-## Budget information in reports
-
-Many Cloudyn reports show budget information after you've manually created one. So reports won't show budget information until you create a budget. For more information, see [Budget Management settings](#budget-management-settings).
-
-## Reports and reporting features
-
-Cloudyn includes the following reports and reporting features.
-
-### Cost Navigator report
-
-The Cost Navigator report is a quick way to view your billing consumption using a dashboard view. It has a subset of filters and basic views to immediately show a summarized view of your organization's costs. Costs are shown by date. Because the report is intended as an initial view of your costs, it's not as flexible or as comprehensive as many other reports or custom dashboards that you create yourself.
-
-By default, major views in the report show:
-
- Cost over time, showing a work-week bar chart view. You can change the **Date Range** to change the bar chart's date range.
- Expenditures by service, using a pie chart.
- Resource categorization by tags, using a pie chart.
- Expenditures by cost entities, using a pie chart.
- Cost total, per date, in a list view.
-
-### Cost Analysis report
-
-The Cost Analysis report is a calculation of showback and chargeback, based on your policy. It aggregates your cloud consumption during a selected time frame, after having applied all allocation rules to your cost. For example, it calculates the costs by tags, reassigns the costs of untagged resources and optionally allocates the utilization of reserved instances.
-
-The policies set in [Cost Allocation 360](tutorial-manage-costs.md#use-custom-tags-to-allocate-costs) are used in the Cost Analysis report and results are then combined with information from your cloud vendor's raw data.
-
-How is this report calculated? The Cloudyn service ensures allocation retains the integrity of each linked account by applying _account affinity_. Affinity ensures an account that doesn't use a specific service doesn't have any costs of this service allocated to it. The costs accrued in that account remain in that account and are not calculated by the allocation policies. For example, you might have five linked accounts. If only three of them use storage services, then the cost of storage services is only allocated across tags in the three accounts.
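
To make the affinity behavior concrete, here's a minimal sketch of the idea. It's an illustration only, not Cloudyn's actual algorithm; the proportional-to-usage split and all figures are assumptions:

```python
# Sketch of account affinity: a service's cost is allocated only across
# the accounts that actually used that service. Accounts with no usage
# of a service keep none of that service's cost. Illustrative only.
def allocate_with_affinity(service_costs, usage_by_account):
    """service_costs: {service: total_cost}
    usage_by_account: {service: {account: usage_units}}"""
    allocation = {}
    for service, total_cost in service_costs.items():
        usage = usage_by_account.get(service, {})
        total_usage = sum(usage.values())
        if total_usage == 0:
            continue  # no consuming accounts, so nothing is allocated
        for account, units in usage.items():
            share = total_cost * units / total_usage
            allocation[account] = allocation.get(account, 0.0) + share
    return allocation

# Five linked accounts, but only three use storage, so the $300 storage
# cost is split across those three accounts only.
costs = {"storage": 300.0}
usage = {"storage": {"acct-1": 10, "acct-2": 20, "acct-3": 30}}
print(allocate_with_affinity(costs, usage))
# {'acct-1': 50.0, 'acct-2': 100.0, 'acct-3': 150.0}
```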
-
-Use the Cost Analysis report to:
-
- Calculate your organization's chargeback/showback.
- Categorize all your costs.
- Display an aggregated view of your entire deployment for a specific time frame.
- View costs by tag categories based on policies created in the cost model.
-
-To use the Cost Analysis report:
-
-1. Select a date range.
-2. Add tags, as needed.
-3. Add groups.
-4. Choose a cost model that you created previously.
-
-### Cost Over Time report
-
-The Cost over Time report displays the results of cost allocation as time series. It allows you to observe trends and detect irregularities in your deployment. It essentially shows costs distributed over a defined period. The report includes your main cost contributors including ongoing costs and one-time reserved instance fees that are being spent during a selected time frame. Policies set in [Cost Allocation 360](tutorial-manage-costs.md#use-custom-tags-to-allocate-costs) are used in this report.
-
-Use the Cost Over Time report to:
-
- See changes over time and what influences change from one day (or date range) to the next.
- Analyze costs over time for a specific instance.
- Understand why there was a cost increase for a specific instance.
-
-To use the Cost Over Time report:
-
-1. Select a date range.
-2. Add tags, as needed.
-3. Add groups.
-4. Choose a cost model that you created previously.
-5. Select actual costs or amortized costs.
-6. Choose whether to apply allocation rules, to view either the raw billing data or the recalculated costs.
-
-### Actual Cost Analysis report
-
-The Actual Cost Analysis report shows provider costs with no modifications. It shows your main cost contributors, including ongoing costs and one-time fees.
-
-You can use the report to view cost information for your subscriptions. In the report, Azure subscriptions are shown as **account name** and **account number**. **Linked accounts** show AWS subscriptions. To view per-subscription costs as a breakdown for each account, under **Groups**, select the type of subscription that you have.
-
-Use the Actual Cost Analysis report to:
-
- Analyze and monitor raw provider costs spent during a specified time frame.
- Schedule a threshold alert.
- Analyze unmodified costs incurred by your accounts and entities.
-
-### Actual Cost Over Time report
-
-The Actual Cost Over Time report is a standard cost analysis report distributing cost over a defined time resolution. The report displays spending over time to allow you to observe trends and detect spending irregularities. This report shows your main cost contributors including ongoing costs and one-time reserved instance fees that are being spent during a selected time frame.
-
-Use the Actual Cost Over Time report to:
-
- See cost trends over time.
- Find irregularities in cost.
- Answer cost-related questions about your cloud providers.
-
-### Amortized cost reports
-
-This set of amortized cost reports shows non-usage-based service fees or one-time payable costs, spreading their cost evenly over their lifespan. For example, one-time fees might include:
-
- Annual support fees
- Annual security component fees
- Reserved Instances purchase fees
- Some Azure Marketplace items
-
-In the billing file, one-time fees are characterized when the service consumption start and end dates (timestamp) have equal values. The Cloudyn service then recognizes them as one-time fees that are amortized. Other consumption-based services with on-demand usage costs are not amortized.
-
-Amortized cost reports include:
-
- Amortized cost analysis
- Amortized cost over time
-
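
To make the even spread concrete, here's a minimal sketch of amortizing a one-time fee across the months of its term. This is an illustration only, not Cloudyn's implementation; the fee amount and term are hypothetical:

```python
from datetime import date

def amortize_one_time_fee(fee: float, start: date, term_months: int) -> dict:
    """Spread a one-time fee evenly over its term, keyed by month."""
    monthly = fee / term_months
    schedule = {}
    year, month = start.year, start.month
    for _ in range(term_months):
        schedule[f"{year}-{month:02d}"] = round(monthly, 2)
        month += 1
        if month > 12:
            year, month = year + 1, 1
    return schedule

# Hypothetical $3,650 one-year reserved instance purchase: an actual
# cost report shows $3,650 in the purchase month, while an amortized
# report shows about $304.17 in each of the 12 months instead.
print(amortize_one_time_fee(3650.0, date(2020, 1, 1), 12))
```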
-### Custom Charges report
-
-Enterprise and CSP users often find themselves providing added services to their external or internal customers, in addition to their own cloud resource consumption. You define custom charges for added services or discounts that are added to customer's billing or chargeback reports as custom line items.
-
-Custom service charges reflect services that aren't normally shown in a bill. The custom charges that you create are then shown in Cost reports.
-
-*Custom charges aren't custom pricing*. The list of custom charges doesn't show the different rates that you may be charging. For example, AWS billing charges are displayed just as they are charged.
-
-To create a custom charge:
-
-1. In **Custom Charges**, click **Add New**. The _Add New Custom Charge_ dialog box is displayed.
-2. In **Provider Name**, enter the name of the provider.
-3. In **Service Name**, enter the type of service.
-4. In **Description**, add a description for the custom charge.
-5. In **Type**, select **Percentage** and then, in the Services dropdown, select the services to include as custom charges in the cost reports.
-6. In **Payment**, select if the charge is a One-Time Fee or Recurring Fee. If the charge is a Recurring Fee, select Amortized if you want the charge to be amortized and select the number of months.
-7. In **Dates**, if a one-time fee is selected, enter the date the charge is paid in **Effective Date**. If Recurring Fee is selected, enter the date range, including the start and end dates for the charge.
-8. In the **Entities tree**, select the entities that you want to apply the charge to and then select **On**.
-
-_When charges are assigned to an entity, users can't change them. Charges that are added by an administrator to a parent entity are read-only._
-
-To view custom charges:
-
-Custom charges are shown in Cost reports. For example, open the Actual Cost Analysis report, then under **Extended Filters**, select **Standalone**. Then filter to show **Custom Charges**.
-
-### Cost Allocation 360
-
-You use Cost Allocation 360 to create custom cost allocation models to assign costs to consumed cloud resources. Many reports show information from the custom cost models that you've created. And, some reports only show information after you've created a custom cost model with cost allocation.
-
-For more information about creating custom cost models, see [Tutorial: Manage costs by using Cloudyn](tutorial-manage-costs.md).
-
-### Cost vs. Budget Over Time report
-
-The Cost vs. Budget Over Time report allows you to compare the main cost contributors against your budget. The assigned budget appears in the report so that you can view your (over/under/par) budget consumption over time. Using Show/Hide Fields at the top of the report, you can select to view cost, budget, accumulated cost, and total budget.
-
-### Current Month Projected Cost report
-
-The Current Month Projected Cost report provides insight into your current month-to-date cost summary. This report displays your costs from the beginning of the month, your costs from the previous month, and the total projected cost for the current month. The current month's projected cost is calculated as the sum of the month-to-date cost and a projection based on the cost monitored in the last 30 days.
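
As a rough illustration of that calculation (the exact projection formula isn't documented here, so treat this as an assumption), a simple run-rate version looks like this:

```python
# Hedged sketch of the projection described above: month-to-date cost
# plus a run-rate projection derived from the trailing 30 days of cost.
# The formula and all figures are illustrative assumptions.
def project_current_month(month_to_date: float,
                          trailing_30_day_cost: float,
                          days_elapsed: int,
                          days_in_month: int) -> float:
    daily_run_rate = trailing_30_day_cost / 30.0
    remaining_days = days_in_month - days_elapsed
    return month_to_date + daily_run_rate * remaining_days

# $2,000 spent in the first 10 days, $5,700 over the trailing 30 days:
print(project_current_month(2000.0, 5700.0, 10, 30))  # 2000 + 190 * 20 = 5800.0
```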
-
-Use the Current Month Projected Cost report to:
-
- Project monthly costs by service
- Project monthly costs by account
-
-### Annual Projected Cost report
-
-The Annual Projected Costs report allows you to view annual projected costs based on previous spending trends. It shows the next 12 months of overall projected costs. The projections are made using a trend function extrapolated over the next 12 months, based on the costs associated with the last 30 days of usage.
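
For intuition, here's a hedged sketch of that kind of extrapolation: fit a linear trend to 30 days of synthetic daily costs, then project 12 months ahead. The report's actual trend function isn't documented here, so this is an assumption:

```python
import numpy as np

# Fit a linear trend to 30 days of synthetic daily costs, then sum the
# extrapolated trend over each of the next 12 (30-day) months. This is
# an illustrative stand-in for the report's trend function.
rng = np.random.default_rng(0)
days = np.arange(30)
daily_cost = 100.0 + 0.5 * days + rng.normal(0.0, 2.0, size=30)

slope, intercept = np.polyfit(days, daily_cost, 1)

monthly_projection = []
for month in range(12):
    future = np.arange(30 + month * 30, 30 + (month + 1) * 30)
    monthly_projection.append(float(np.sum(slope * future + intercept)))

print([round(m) for m in monthly_projection])  # rising monthly totals
```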
-
-### Budget Management settings
-
-Budget Management allows you to set a budget for your fiscal year.
-
-To add a budget to an entity:
-
-1. On the Budget Management page, under **Entities**, select the entity where you want to create the budget.
-2. In the budget year, select the year where you want to create the budget.
-3. In each month, set your budget and then click **Save**.
-
-To import a file for the annual budget:
-
-1. Under **Actions**, select **Export** to download an empty CSV template to use as your basis for the budget.
-2. Fill in the CSV file with your budget entries and save it locally.
-3. Under **Actions**, select **Import**.
-4. Select your saved file and then click **OK**.
-
-To export your completed budget as a CSV file, under **Actions**, select **Export** to download the file.
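
If you prefer to fill in the budget file programmatically before importing it, something like the following works. The authoritative column layout comes from the template you export in step 1 of the import procedure; the month/budget columns here are an assumption for illustration:

```python
import csv

# Write a hypothetical annual budget file. The real template exported
# from Budget Management defines the actual columns; this layout is an
# illustrative assumption.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

with open("budget.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Month", "Budget"])
    for month in months:
        writer.writerow([month, 10000])  # flat $10,000 per month
```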
-
-When completed, your budget is shown in Cost Analysis reports and in the Cost vs. Budget Over Time report. You can also schedule reports based on budget thresholds.
-
-### Azure Resource Explorer report
-
-The Azure Resource Explorer report shows a bulk list of all the Azure resources available in Cloudyn. To effectively use the report, your Azure accounts should have extended metrics enabled. Extended metrics provide Cloudyn access to your Azure VMs. For more information, see [Add extended metrics for Azure virtual machines](azure-vm-extended-metrics.md).
-
-### Azure Resources Over Time report
-
-The Azure Resources Over Time report shows a breakdown of all resources running over a specific period. To effectively use the report, your Azure accounts should have extended metrics enabled. Extended metrics provide Cloudyn access to your Azure VMs. For more information, see [Add extended metrics for Azure virtual machines](azure-vm-extended-metrics.md).
-
-### Instance Explorer report
-
-The Instance Explorer report is used to view various metrics for the assets of your virtual machines. You can drill into specific instances to view information such as:
- Instance running intervals
- Life cycle in the selected period
- CPU utilization
- Network input
- Output traffic
- Active disks
-
-The Instance Explorer report collects all running intervals within the defined date range and aggregates data accordingly. To view each of the running intervals during the date range, expand the instance. The cost of each instance is calculated for the date range selected based on AWS and Azure list prices. No discounts are applied. You can add additional fields to the report using Show/Hide Fields.
-
-Use Instance Explorer report to:
-
- Calculate the estimated cost per machine.
- Create a full list, including aggregated running hours, of all machines that were active during a time range.
- Create a list by cloud service provider or account.
- View machines created or terminated during a time range.
- View all currently stopped machines.
- View the tags of each machine.
-
-### Instances Over Time report
-
-Using the Instances Over Time report, you can see the maximum number of machines that were active on each day during the selected time range. If the defined resolution is by week or month, results are the maximum number of machines active on any given day during that period. Select a date range and then select the filters that you want displayed in the report.
-
-### Instance Utilization Over Time report
-
-This report shows a breakdown of CPU or memory use over time for all your instances.
-
-### Compute Power Cost Over Time report
-
-The Compute Power Over Time report provides a breakdown of compute power over a specified date range. Although other reports show the number of running machines or the runtime hours, this report shows Core hours, Compute unit hours, or GB RAM hours.
-
-Use the report to:
-
- Check compute power within a specified date range.
- View compute times based on cost allocation models.
-
-This report is linked to your [Cost Allocation 360](tutorial-manage-costs.md#use-custom-tags-to-allocate-costs) policies, so results are shown based on the tagging and policies defined in your selected cost model. If you haven't created a policy, results aren't shown.
-
-### Compute Power Average Cost Over Time report
-
-You use the Compute Power Average Cost Over Time report to view more than just the cost of each running machine. The report shows your average cost per instance hour, core hour, compute unit hour, and GB RAM hour. The report provides insight into the efficiency of your deployment.
-
-This report is linked to your [Cost Allocation 360](tutorial-manage-costs.md#use-custom-tags-to-allocate-costs) policies, so results are displayed based on the tagging and policies defined in your selected cost model. If you haven't created a policy, results aren't shown.
-
-### S3 Cost Over Time report
-
-The S3 Cost Over Time report provides a breakdown of Amazon Simple Storage Service (S3) costs per bucket over time for a specified time frame. The report helps you find the buckets that are your main cost drivers and it shows you trends in your S3 usage and spending.
-
-### S3 Distribution of Cost report
-
-Use the report to analyze your S3 cost for the last month by bucket and storage class. You can use the pie chart view to set the visibility threshold. Or, you can use the table view to see subtotals.
-
-### S3 Bucket Properties report
-
-Use the report to view S3 bucket properties. You can use the pie chart view to set the visibility threshold. Or, you can use the table view to see subtotals.
-
-### RDS Instances Over Time report
-
-Use the report to view a breakdown of all Amazon Relational Database Service (RDS) instances running during the specified period.
-
-### RDS Active Instances report
-
-Use the report to analyze RDS active instances. In the report, expand the line item to view additional information.
-
-### Azure Reserved Instances report
-
-The Azure Reserved Instances report provides you with a single view of all your Azure reserved instances. This report displays each purchase as its own line item. The report also shows details about that purchase, such as the account that purchased it, the type of purchase and instance type, days remaining, and so on. You can show or hide report data using Show/Hide Fields.
-
-Use the Azure Reserved Instances report to view:
-
- A list of all reservations by purchase date.
- Time remaining until the RI expires.
- One-time fees.
- The account that purchased RIs, and when.
-
-### AWS Reserved Instances report
-
-The AWS Reserved Instances report provides you with a single view of all AWS reserved instances. This report displays each purchase as its own line item, with details about that purchase such as the account that purchased it, the type of purchase and instance type, days remaining, and so on. You can show or hide report data using Show/Hide Fields.
-
-Use the AWS Reserved Instances report to view:
-
- A list of all reservations by purchase date.
- Time remaining until the RI expires.
- One-time fees.
- Original purchase ID (reservation ID).
- The account that purchased RIs, and when.
-
-### EC2 RI Buying Recommendations report
-
-The foundation of cloud resource consumption is the on-demand model, where resources incur cost only when used. There are no up-front commitments: you pay only for what you use, when you use it.
-
-AWS offers an alternative pricing model for its Elastic Compute Cloud (EC2) services: the reserved instance (RI). This pricing model guarantees users the capacity whenever they need it for the duration of the RI. The RI offers significant price discounts over on-demand pricing. In return, users make an upfront commitment for the use of a virtual instance. The commitment is bound to a specific family, size, availability zone (AZ), and operating system, over the period of commitment (one or three years). The RI allows AWS to efficiently plan future capacity, as well as to gain customer commitment to using its services.
-
-There are three payment options for RIs:
-
- All upfront - a bulk sum at day 0, offering the highest discount.
- No upfront - the cost of the RI is paid in monthly installments over the duration of the RI, offering the lowest discount.
- Partial upfront - between a quarter and a half of the price is paid up front, and the rest in monthly installments, with a discount rate that is lower than, but close to, the all-upfront rate.
-
-Cloudyn evaluates the uptime of each machine for the last 30 days. Cloudyn recommends buying RIs when it is more cost-effective to run the machine with an RI at the current uptime level.
-
-The report shows the justification for its recommendations to save the most money over the year. The recommendations suggest replacing on-demand instances with RIs. You can purchase RIs directly from the report.
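
The underlying break-even logic can be sketched as follows. This is an illustration of the idea, not Cloudyn's algorithm, and all rates are hypothetical:

```python
# An RI costs a fixed amount regardless of use, so it pays off once
# on-demand spend at the observed uptime would exceed the RI's annual
# cost. Sketch only; rates are hypothetical.
HOURS_PER_YEAR = 8760

def ri_is_cost_effective(on_demand_hourly: float,
                         ri_annual_cost: float,
                         observed_uptime: float) -> bool:
    """observed_uptime is the fraction of hours the machine ran (0-1)."""
    on_demand_annual = on_demand_hourly * HOURS_PER_YEAR * observed_uptime
    return on_demand_annual > ri_annual_cost

# Break-even uptime for a $1,000/year RI vs. $0.20/hour on demand:
breakeven = 1000 / (0.20 * HOURS_PER_YEAR)  # ~0.57, i.e. about 57% uptime
print(round(breakeven, 2), ri_is_cost_effective(0.20, 1000.0, 0.75))  # 0.57 True
```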
-
-Each tab opens as a full report. Notable sections in tabs include:
-
- **EC2 RI Purchase Impact** - This section provides a simulation of the difference between on-demand and reserved instances. Click **Zoom in** to see the full EC2 RI Purchase Impact report with the filters already defined to your recommendation. This report shows the purchase impact of all potential RI purchases. You can adjust the expected average uptime to see the potential saving when you purchase EC2 Reserved Instances.
- **Saving Analysis** - This section provides the potential savings achieved and the month the savings are actualized when following Cloudyn recommendations. The actual savings and the percent saved are highlighted in red.
- **EC2 RI Type Comparison** - This section emphasizes the ROI highlights of Cloudyn's recommended deployment, including all relevant options. The results in this report assume that the machine is running at 100% uptime. Click **Zoom In** to open the detailed report.
- **Instances Over Time** - This section displays a breakdown of all instances associated with the recommendation: OnDemand, Reserved Instances, and Spot. Click **Zoom In** to open the detailed report.
- **Breakeven Points** - This section displays a table of all the possible recommended deployments, the ROI, and the month when the ROI occurs. Click **Zoom In** to open the detailed report.
-
-### EC2 Reservations Over Time report
-
-The EC2 Reservations Over Time report tracks the status of your usage of your purchased EC2 RIs. You can set the resolution of the report to hour, day, or week.
-
-Use the report to:
-
- Display reservations purchased that are used and not used.
- Drill in to the resolution by hour to see RI usage per hour.
-
-### Savings Over Time report
-
-Use the Savings Over Time report to view the savings achieved using reserved instances as well as spot instances. The report shows the ROI achieved over time resulting from RI purchases.
-
-To view savings from RIs, group the results by **Price Model** and select **Reservation**. To view RI savings achieved by a specific account or instance type, add the relevant grouping and filter to the account or instance type.
-
-To see savings from Spot instance use, filter the **Price Model** to **Spot**. The default filter for this report is RI and Spot Instances.
-
-### RDS RI Buying Recommendations report
-
-The RDS RI Buying Recommendations report recommends when to use RDS RIs instead of on-demand instances.
-
-Each tab opens as a full report. Notable sections in tabs include:
-
- **RDS RI Purchase Impact** - This section provides a simulation of the difference between on-demand and reserved instances. Click **Zoom in** to see the full RDS RI Purchase Impact report with the filters already defined to your recommendation. This report allows you to see the purchase impact of all potential RI purchases. You can adjust the expected average uptime and see the potential saving by purchasing RIs.
- **Saving Analysis** - This section provides the potential savings achieved and the month the savings are actualized when following Cloudyn recommendations. The actual savings and the percent saved are highlighted in red.
- **RDS RI Type Comparison** - This section emphasizes the ROI highlights of the recommended deployment, including all relevant options. The results in this report assume that the machine is running at 100% uptime. Click **Zoom In** to open the detailed report for the selected machine.
- **Instances Over Time** - This section displays a breakdown of all instances associated with the recommendation: OnDemand, Reserved Instances, and Spot. Click **Zoom In** to open the detailed report.
- **Breakeven Points** - This section displays a table of all the possible recommended deployments, the ROI, and the month when the ROI occurs. Click **Zoom In** to open the detailed report.
-
-### RDS Reservations Over Time report
-
-Use the RDS Reservation Over Time report to view a breakdown of both your used and unused reservations during the specified period.
-
-### Reserved Instance Purchase Impact report
-
-The EC2 RI Purchase Impact report allows you to simulate reserved instance cost versus on-demand cost over time. It can help you make better purchasing decisions. Adjust the filters such as average runtime, term, platform, and others to make informed decisions when you consider RI purchases.
-
-### Cost-Effective Sizing Recommendations report
-
-The Cost-Effective Sizing Recommendations report provides results for AWS and Azure. For AWS users, your RI purchases are taken into consideration, and the results don't include machines running as RIs. This report provides a list of underutilized instances that are candidates to downsize. Recommendations are based on your usage and performance data from the last 30 days. Each recommendation includes a list of candidates to downsize, the justification to downsize, and a link to view complete details and performance metrics of the instance. And, when relevant, recommendations advise changing to newer-generation instance types.
-
-You can't download the list of instance IDs that are recommended to downsize from this report. To download Instance IDs, use the All Sizing Recommendations report.
-
-Consider the following downsizing example:
-
-You have six m3.xlarge running instances. Cloudyn analysis shows that five of them have low CPU utilization. Consider downsizing them.
-
-In Cost Impact, the cost impact is calculated. In this example, by expanding the line item, you can see that one m3.xlarge instance (Linux/Unix) costs $0.266 per hour and one m3.large instance (Linux/Unix) costs $0.133 per hour. So, the annual cost is $11,651 for five m3.xlarge instances running at 100% utilization. The annual cost is $5,825 for five m3.large instances running at 100% utilization. The potential savings are $5,825.
-
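The arithmetic behind those figures checks out as a simple hours-times-rate calculation. Here's a short verification sketch using the example's own numbers:

```python
# Verify the example's numbers: five m3.xlarge at $0.266/hour vs. five
# m3.large at $0.133/hour, both at 100% utilization for a full year.
HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate: float, count: int) -> float:
    return hourly_rate * HOURS_PER_YEAR * count

current = annual_cost(0.266, 5)    # ~$11,651 for five m3.xlarge
downsized = annual_cost(0.133, 5)  # ~$5,825 for five m3.large
print(round(current), round(downsized), round(current - downsized))
# 11651 5825 5825 -- matching the potential savings above
```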
-To view cost-effective sizing justifications, click + to expand the line item. In **Details**:
-
- The **Recommendation Justification** section displays the current deployment and the number of instances recommended to downsize.
- The **Cost Impact** section displays the calculation used to determine potential savings.
- The **Potential Annual Savings** section displays the potential annual savings when downsizing per Cloudyn's recommendation.
-
-### All Sizing Recommendations report
-
-This report provides a list of underutilized instances that are candidates to downsize. The recommendations are based on your usage and performance data from the last 30 days. In each recommendation, you can view complete details and performance metrics of the instance.
-
-If you've purchased AWS reserved instances, this report contains results for all running instances, including instances running as RIs.
-
-Use the All Sizing Recommendations report to:
-
- See a list of all your instances that are candidates to downsize.
- Export a report list containing Instance Names and IDs.
-
-To view recommendation details for a specific Instance, click **+** to expand the details. The Recommendation Details section provides an overview of the recommendation.
-
-The **Tags** section provides the list of the tag keys and values for the selected instance. Use Tags in the left pane to filter the section.
-
-The **CPU Utilization** section provides the CPU utilization for the instance over the last month, by day.
-
-Click the graph to drill down and open the Instance CPU Over Time Report to see a breakdown of the instances.
-
- Use **Show/Hide Fields** to add or remove fields: Timestamp, Avg CPU, Min CPU, Max CPU.
- Use **Date Range** to enter a date or date range and drill into a specific InstanceID.
- Use **Extended Filters** to show all instances or a specific Instance ID.
- Click **Zoom in** to open the CPU Utilization Report.
-
-If the instance hasn't been monitored for 30 days, incomplete data is shown.
-
-The **Memory Utilization (GB)** section provides information about the memory utilized. For AWS users, memory metrics are not automatically available and need to be added per instance through AWS. AWS charges you to enable memory metrics for EC2 instances.
-
-The **Memory Utilization (%)** section displays the percent of memory used.
-
-The **Network Input Traffic** section displays a snapshot over time of the network traffic, average, and maximum, for the selected instance. Hover over the lines to see the date and maximum traffic for that time. Click **Zoom In** to open the Network Input Traffic Report.
-
-The **Network Output Traffic** section displays a snapshot of the network output traffic for the selected instance. Hover over the lines to see the date and maximum traffic for that time. Click **Zoom In** to open the Network Output Traffic report.
-
-### Instance Metrics Explorer report
-
-The Instance Metrics Explorer report shows cross-cloud performance metrics per instance. Use the report to view instances that are over- or under-utilized based on CPU, memory, and network metric thresholds.
-
-To view cross-cloud performance per instance:
-
-1. In **Date Range**, select a date range for which you want to view performance.
-2. In **Tags**, select any tags that you want to view.
-3. In **Filters**, select the filters you want to display in the report.
-4. In **Extended Filters**, adjust the report thresholds for:
- - Avg CPU
- - Max CPU
- - Avg Memory
- - Max Memory
-5. In **Extended Filters**, click **Show** and then select the type of instances to display.
-
-To view a specific instance's metrics over time:
-
- Go to the Instance Metrics Explorer report and click **+** to view details.
-
-### RDS Sizing Recommendations report
-
-The RDS Sizing Recommendations report provides RDS sizing recommendations to optimize your cloud usage. It provides a list of underutilized instances that are candidates to downsize. Cloudyn recommendations are based on the usage and performance data of the last 30 days. You can filter recommendations by Account Name, Region, Instance Type, and Status.
-
-### Sizing Threshold Manager report
-
-Cloudyn's built-in sizing recommendations are calculated using a complex algorithm to provide accurate sizing suggestions. You can adjust the thresholds for downsizing recommendations.
-
-To manually adjust threshold sizing recommendations:
-
-1. In Sizing Threshold Manager, adjust the following thresholds as you like:
- - Average CPU %
- - Maximum CPU %
- - Average Memory %
- - Maximum Memory %
-2. Click **Apply** to save changes.
-3. Changes apply immediately to all your recommendations.
-
-To restore default thresholds:
-
- In Sizing Threshold Manager, click **Restore Defaults**.
-
-### Compute Instance Types report
-
-Use the Instance Types report to:
-
- View instance types by Service, Family, API Name, and Name.
- View details such as CPU, ECU, RAM, and Network.
-
-You can use **Search** to find specific line items.
-
-## Next steps
-
- To learn how to use reports, including how to customize, save, and schedule them, see [Understanding cost reports](understanding-cost-reports.md).
- To learn about the dashboards included in Cloudyn and how to create your own custom dashboards, see [View key cost metrics with dashboards](dashboards.md).
cost-management-billing Account Admin Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/account-admin-tasks.md
This article explains how to perform the following tasks in the Azure portal:
You must be the Account Administrator to perform any of these tasks.
+## Accounts portal is retiring
+
+The Accounts portal will retire and customers will be redirected to the Azure portal by December 31, 2021. The features supported in the Accounts portal will be migrated to the Azure portal. This article explains how to perform some of the most common operations in the Azure portal.
+
## Navigate to your subscription's payment methods
1. Sign in to the Azure portal as the Account Administrator.
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
Here are details of the application's actions and arguments:
1. Go to the [Microsoft integration runtime download page](https://www.microsoft.com/download/details.aspx?id=39717). 2. Select **Download**, select the 64-bit version, and select **Next**. The 32-bit version isn't supported.
-3. Run the Managed Identity file directly, or save it to your hard drive and run it.
+3. Run the MSI file directly, or save it to your hard drive and run it.
4. On the **Welcome** window, select a language and select **Next**. 5. Accept the Microsoft Software License Terms and select **Next**. 6. Select **folder** to install the self-hosted integration runtime, and select **Next**.
data-factory Data Flow Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-create.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Create Azure Data Factory Data Flow
Last updated 06/04/2021
Mapping Data Flows in ADF provide a way to transform data at scale without any coding required. You can design a data transformation job in the data flow designer by constructing a series of transformations. Start with any number of source transformations followed by data transformation steps. Then, complete your data flow with a sink to land your results in a destination.
-Get started by first creating a new V2 Data Factory from the Azure portal. After creating your new factory, click on the "Author & Monitor" tile to launch the Data Factory UI.
+Get started by first creating a new V2 Data Factory from the Azure portal. After creating your new factory, select "Open" in the "Open Azure Data Factory Studio" tile to launch the Data Factory UI.
![Screenshot shows the New data factory pane with V2 selected for Version.](media/data-flow/v2portal.png "data flow create")
data-factory Join Azure Ssis Integration Runtime Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md
If your SSIS packages access Azure resources that support [virtual network servi
If your SSIS packages access data stores/resources that allow only specific static public IP addresses and you want to secure access to those resources from Azure-SSIS IR, you can associate [public IP addresses](../virtual-network/virtual-network-public-ip-address.md) with Azure-SSIS IR while joining it to a virtual network and then add an IP firewall rule to the relevant resources to allow access from those IP addresses. There are two alternative ways to do this:

- When you create Azure-SSIS IR, you can bring your own public IP addresses and specify them via [Data Factory UI or SDK](#join-the-azure-ssis-ir-to-a-virtual-network). Only the outbound internet connectivity of Azure-SSIS IR will use your provided public IP addresses and other devices in the subnet will not use them.
+- You can also setup [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) for the subnet that Azure-SSIS IR will join and all outbound connectivity in this subnet will use your specified public IP addresses.
In all cases, the virtual network can be deployed only through the Azure Resource Manager deployment model.
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Copy data from Azure Data Lake Storage Gen1 to Gen2 with Azure Data Factory
This article shows you how to use the Data Factory copy data tool to copy data f
3. Select **Create**. 4. After creation is finished, go to your data factory. You see the **Data Factory** home page as shown in the following image:
- ![Data factory home page](./media/load-azure-data-lake-storage-gen2-from-gen1/data-factory-home-page.png)
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-5. Select the **Author & Monitor** tile to launch the Data Integration application in a separate tab.
+5. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
## Load data into Azure Data Lake Storage Gen2
data-factory Load Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-storage-gen2.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Load data into Azure Data Lake Storage Gen2 with Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to load data f
4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
- Select the **Author & Monitor** tile to launch the Data Integration Application in a separate tab.
+ Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
## Load data into Azure Data Lake Storage Gen2
data-factory Load Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-data-lake-store.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Load data into Azure Data Lake Storage Gen1 by using Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to _load data
3. Select **Create**. 4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
- ![Data factory home page](./media/load-data-into-azure-data-lake-store/data-factory-home-page.png)
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
- Select the **Author & Monitor** tile to launch the Data Integration Application in a separate tab.
+ Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
## Load data into Data Lake Storage Gen1
data-factory Load Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-azure-sql-data-warehouse.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Load data into Azure Synapse Analytics by using Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to _load data
3. Select **Create**. 4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
- Select the **Author & Monitor** tile to launch the Data Integration Application in a separate tab.
+ Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
## Load data into Azure Synapse Analytics
data-factory Load Office 365 Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-office-365-data.md
description: 'Use Azure Data Factory to copy data from Office 365'
Previously updated : 06/04/2021 Last updated : 07/05/2021
This article shows you how to use the Data Factory to load data from Office 365 in
3. Select **Create**. 4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
- ![Data factory home page](./media/load-office-365-data/data-factory-home-page.png)
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-5. Select the **Author & Monitor** tile to launch the Data Integration Application in a separate tab.
+5. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
## Create a pipeline
data-factory Load Sap Bw Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/load-sap-bw-data.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Copy data from SAP Business Warehouse by using Azure Data Factory
This article shows how to use Azure Data Factory to copy data from SAP Business
## Do a full copy from SAP BW Open Hub
-In the Azure portal, go to your data factory. Select **Author & Monitor** to open the Data Factory UI in a separate tab.
+In the Azure portal, go to your data factory. Select **Open** on the **Open Azure Data Factory Studio** tile to open the Data Factory UI in a separate tab.
1. On the home page, select **Ingest** to open the Copy Data tool.
data-factory Quickstart Create Data Factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-copy-data-tool.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Quickstart: Use the Copy Data tool to copy data
In this quickstart, you use the Azure portal to create a data factory. Then, you
1. Select **Create**.
-1. After the creation is complete, you see the **Data Factory** page. Select the **Author & Monitor** tile to start the Azure Data Factory user interface (UI) application on a separate tab.
+1. After the creation is complete, you see the **Data Factory** page. Select **Open** on the **Open Azure Data Factory Studio** tile to start the Azure Data Factory user interface (UI) application on a separate tab.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
## Start the Copy Data tool
data-factory Quickstart Create Data Factory Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-portal.md
description: Create a data factory with a pipeline that copies data from one loc
Previously updated : 06/16/2021 Last updated : 07/05/2021
Watching this video helps you understand the Data Factory UI:
1. Select **Review + create**, and select **Create** after the validation is passed. After the creation is complete, select **Go to resource** to navigate to the **Data Factory** page.
-1. Select the **Author & Monitor** tile to start the Azure Data Factory user interface (UI) application on a separate browser tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to start the Azure Data Factory user interface (UI) application on a separate browser tab.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
> [!NOTE] > If you see that the web browser is stuck at "Authorizing", clear the **Block third-party cookies and site data** check box. Or keep it selected, create an exception for **login.microsoftonline.com**, and then try to open the app again.
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
Previously updated : 05/10/2021 Last updated : 07/05/2021 # Quickstart: Create an Azure Data Factory using ARM template
Keep the container page open, because you can use it to verify the output at the
1. Navigate to the **Data factories** page, and select the data factory you created.
-2. Select the **Author & Monitor** tile.
+2. Select **Open** on the **Open Azure Data Factory Studio** tile.
- :::image type="content" source="media/quickstart-create-data-factory-resource-manager-template/data-factory-author-monitor-tile.png" alt-text="Author & Monitor":::
+   :::image type="content" source="media/quickstart-create-data-factory-resource-manager-template/data-factory-open-tile.png" alt-text="Open Azure Data Factory Studio tile":::
2. Select the **Author** tab :::image type="icon" source="media/quickstart-create-data-factory-resource-manager-template/data-factory-author.png" border="false":::.
data-factory Tutorial Bulk Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-bulk-copy-portal.md
Previously updated : 01/29/2021 Last updated : 07/06/2021 # Copy multiple tables in bulk by using Azure Data Factory in the Azure portal
To verify and turn on this setting, go to your server > Security > Firewalls and
1. Click **Create**. 1. After the creation is complete, select **Go to resource** to navigate to the **Data Factory** page.
-1. Click **Author & Monitor** tile to launch the Data Factory UI application in a separate tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Factory UI application in a separate tab.
## Create linked services
data-factory Tutorial Copy Data Portal Private https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-portal-private.md
Previously updated : 06/04/2021 Last updated : 07/05/2021
In this step, you create a data factory and start the Data Factory UI to create
1. After the creation is finished, you see the notice in the Notifications center. Select **Go to resource** to go to the **Data Factory** page.
-1. Select **Author & Monitor** to launch the Data Factory UI in a separate tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Factory UI in a separate tab.
## Create an Azure integration runtime in Data Factory Managed Virtual Network In this step, you create an Azure integration runtime and enable Data Factory Managed Virtual Network.
data-factory Tutorial Copy Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-portal.md
Previously updated : 06/04/2021 Last updated : 07/05/2021
In this step, you create a data factory and start the Data Factory UI to create
8. Select **Git configuration** tab on the top, and select the **Configure Git later** check box. 9. Select **Review + create**, and select **Create** after the validation is passed. 10. After the creation is finished, you see the notice in Notifications center. Select **Go to resource** to navigate to the Data factory page.
-11. Select **Author & Monitor** to launch the Azure Data Factory UI in a separate tab.
+11. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Azure Data Factory UI in a separate tab.
## Create a pipeline
data-factory Tutorial Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-copy-data-tool.md
Previously updated : 06/04/2021 Last updated : 07/06/2021 # Copy data from Azure Blob storage to a SQL Database by using the Copy Data tool
Prepare your Blob storage and your SQL Database for the tutorial by performing t
1. After creation is finished, the **Data Factory** home page is displayed.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-1. To launch the Azure Data Factory user interface (UI) in a separate tab, select the **Author & Monitor** tile.
+1. To launch the Azure Data Factory user interface (UI) in a separate tab, select **Open** on the **Open Azure Data Factory Studio** tile.
## Use the Copy Data tool to create a pipeline
Prepare your Blob storage and your SQL Database for the tutorial by performing t
![Screenshot that shows the Azure Data Factory home page.](./media/doc-common-process/get-started-page.png)
-1. On the **Properties** page, under **Task name**, enter **CopyFromBlobToSqlPipeline**. Then select **Next**. The Data Factory UI creates a pipeline with the specified task name.
-
- ![Create a pipeline](./media/tutorial-copy-data-tool/create-pipeline.png)
+1. On the **Properties** page of the Copy Data tool, choose **Built-in copy task** under **Task type**, then select **Next**.
+ ![Screenshot that shows the Properties page](./media/tutorial-copy-data-tool/copy-data-tool-properties-page.png)
+
1. On the **Source data store** page, complete the following steps:
- a. Select **+ Create new connection** to add a connection
+ a. Select **+ Create new connection** to add a connection.
b. Select **Azure Blob Storage** from the gallery, and then select **Continue**.
- c. On the **New Linked Service** page, select your Azure subscription, and select your storage account from the **Storage account name** list. Test connection and then select **Create**.
-
- d. Select the newly created linked service as source, then select **Next**.
+ c. On the **New connection (Azure Blob Storage)** page, select your Azure subscription from the **Azure subscription** list, and select your storage account from the **Storage account name** list. Test connection and then select **Create**.
- ![Select source linked service](./media/tutorial-copy-data-tool/select-source-linked-service.png)
+ d. Select the newly created linked service as source in the **Connection** block.
-1. On the **Choose the input file or folder** page, complete the following steps:
+ e. In the **File or folder** section, select **Browse** to navigate to the **adfv2tutorial** folder, select the **inputEmp.txt** file, then select **OK**.
- a. Select **Browse** to navigate to the **adfv2tutorial/input** folder, select the **inputEmp.txt** file, then select **Choose**.
+    f. Select **Next** to move to the next step.
- b. Select **Next** to move to next step.
+ :::image type="content" source="./media/tutorial-copy-data-tool/source-data-store.png" alt-text="Configure the source.":::
-1. On the **File format settings** page, enable the checkbox for *First row as header*. Notice that the tool automatically detects the column and row delimiters. Select **Next**. You can also preview data and view the schema of the input data on this page.
+1. On the **File format settings** page, enable the checkbox for *First row as header*. Notice that the tool automatically detects the column and row delimiters, and that you can preview data and view the schema of the input data by selecting the **Preview data** button on this page. Then select **Next**.
![File format settings](./media/tutorial-copy-data-tool/file-format-settings-page.png) 1. On the **Destination data store** page, complete the following steps:
- a. Select **+ Create new connection** to add a connection
+ a. Select **+ Create new connection** to add a connection.
b. Select **Azure SQL Database** from the gallery, and then select **Continue**.
- c. On the **New Linked Service** page, select your server name and DB name from the dropdown list, and specify the username and password, then select **Create**.
+    c. On the **New connection (Azure SQL Database)** page, select your Azure subscription, server name, and database name from the dropdown lists. Then select **SQL authentication** under **Authentication type**, and specify the username and password. Test connection, and then select **Create**.
- ![Configure Azure SQL DB](./media/tutorial-copy-data-tool/config-azure-sql-db.png)
+ ![Configure Azure SQL DB](./media/tutorial-copy-data-tool/config-azure-sql-db.png)
d. Select the newly created linked service as sink, then select **Next**.
-1. On the **Table mapping** page, select the **[dbo].[emp]** table, and then select **Next**.
+1. On the **Destination data store** page, select **Use existing table**, and then select the **dbo.emp** table. Select **Next**.
1. On the **Column mapping** page, notice that the second and the third columns in the input file are mapped to the **FirstName** and **LastName** columns of the **emp** table. Adjust the mapping to make sure that there is no error, and then select **Next**. ![Column mapping page](./media/tutorial-copy-data-tool/column-mapping.png)
-1. On the **Settings** page, select **Next**.
+1. On the **Settings** page, under **Task name**, enter **CopyFromBlobToSqlPipeline**, and then select **Next**.
+
+ :::image type="content" source="./media/tutorial-copy-data-tool/settings.png" alt-text="Configure the settings.":::
1. On the **Summary** page, review the settings, and then select **Next**.
-1. On the **Deployment page**, select **Monitor** to monitor the pipeline (task).
+1. On the **Deployment** page, select **Monitor** to monitor the pipeline (task).
![Monitor pipeline](./media/tutorial-copy-data-tool/monitor-pipeline.png)
-1. On the Pipeline runs page, select **Refresh** to refresh the list. Select the link under **PIPELINE NAME** to view activity run details or rerun the pipeline.
+1. On the Pipeline runs page, select **Refresh** to refresh the list. Select the link under **Pipeline name** to view activity run details or rerun the pipeline.
![Pipeline run](./media/tutorial-copy-data-tool/pipeline-run.png)
-1. On the Activity runs page, select the **Details** link (eyeglasses icon) under the **ACTIVITY NAME** column for more details about copy operation. To go back to the Pipeline Runs view, select the **ALL pipeline runs** link in the breadcrumb menu. To refresh the view, select **Refresh**.
+1. On the "Activity runs" page, select the **Details** link (eyeglasses icon) under **Activity name** column for more details about copy operation. To go back to the "Pipeline runs" view, select the **All pipeline runs** link in the breadcrumb menu. To refresh the view, select **Refresh**.
![Monitor activity runs](./media/tutorial-copy-data-tool/activity-monitoring.png)
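
The Copy Data tool handles all of this authoring through the UI. For reference, a roughly equivalent pipeline can also be created programmatically. The following sketch uses the `Microsoft.Azure.Management.DataFactory` .NET SDK; the resource names, token, and dataset names are placeholders assumed for illustration (the source and sink datasets are assumed to already exist), not values produced by the tool:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;
using Microsoft.Rest;

// Assumption: an AAD access token and Azure resource names acquired elsewhere.
var client = new DataFactoryManagementClient(new TokenCredentials("<access-token>"))
{
    SubscriptionId = "<subscription-id>"
};

// A copy activity from a Blob storage dataset to an Azure SQL Database dataset,
// similar in shape to what the Copy Data tool generates.
var pipeline = new PipelineResource
{
    Activities = new List<Activity>
    {
        new CopyActivity
        {
            Name = "CopyFromBlobToSql",
            Inputs = new List<DatasetReference> { new DatasetReference { ReferenceName = "SourceBlobDataset" } },
            Outputs = new List<DatasetReference> { new DatasetReference { ReferenceName = "SinkSqlDataset" } },
            Source = new BlobSource(),
            Sink = new SqlSink()
        }
    }
};

client.Pipelines.CreateOrUpdate("<resource-group>", "<factory-name>", "CopyFromBlobToSqlPipeline", pipeline);
```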
data-factory Tutorial Hybrid Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-hybrid-copy-data-tool.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Copy data from a SQL Server database to Azure Blob storage by using the Copy Data tool
You use the name and key of your storage account in this tutorial. To get the na
1. After the creation is finished, you see the **Data Factory** page as shown in the image.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
-1. Select **Author & Monitor** to launch the Data Factory user interface in a separate tab.
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Factory user interface in a separate tab.
## Use the Copy Data tool to create a pipeline
data-factory Tutorial Hybrid Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-hybrid-copy-portal.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Copy data from a SQL Server database to Azure Blob storage
In this step, you create a data factory and start the Data Factory UI to create
1. After the creation is finished, you see the **Data Factory** page as shown in the image:
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
-1. Select the **Author & Monitor** tile to launch the Data Factory UI in a separate tab.
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Factory UI in a separate tab.
## Create a pipeline
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
Previously updated : 06/07/2021 Last updated : 07/05/2021 # Incrementally load data from Azure SQL Managed Instance to Azure Storage using change data capture (CDC)
If you don't have an Azure subscription, create a [free](https://azure.microsoft
![Screenshot shows a message that your deployment is complete and an option to go to resource.](./media/tutorial-incremental-copy-change-data-capture-feature-portal/data-factory-deploy-complete.png) 9. After the creation is complete, you see the **Data Factory** page as shown in the image.
- ![Screenshot shows the data factory that you deployed.](./media/tutorial-incremental-copy-change-data-capture-feature-portal/data-factory-home-page.png)
-10. Click **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+10. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
11. In the home page, switch to the **Manage** tab in the left panel as shown in the following image: ![Screenshot that shows the Manage button.](media/doc-common-process/get-started-page-manage-button.png)
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
Previously updated : 06/07/2021 Last updated : 07/05/2021 # Incrementally load data from Azure SQL Database to Azure Blob Storage using change tracking information using the Azure portal
Install the latest Azure PowerShell modules by following instructions in [How t
![deploying data factory tile](media/tutorial-incremental-copy-change-tracking-feature-portal/deploying-data-factory.png) 9. After the creation is complete, you see the **Data Factory** page as shown in the image.
- ![Data factory home page](./media/tutorial-incremental-copy-change-tracking-feature-portal/data-factory-home-page.png)
-10. Click **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+10. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
11. In the home page, switch to the **Manage** tab in the left panel as shown in the following image: ![Screenshot that shows the Manage button.](media/doc-common-process/get-started-page-manage-button.png)
data-factory Tutorial Incremental Copy Lastmodified Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-lastmodified-copy-data-tool.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Incrementally copy new and changed files based on LastModifiedDate by using the Copy Data tool
Prepare your Blob storage for the tutorial by completing these steps:
6. Under **Location**, select the location for the data factory. Only supported locations appear in the list. The data stores (for example, Azure Storage and Azure SQL Database) and computes (for example, Azure HDInsight) that your data factory uses can be in other locations and regions. 8. Select **Create**. 9. After the data factory is created, the data factory home page appears.
-10. To open the Azure Data Factory user interface (UI) on a separate tab, select the **Author & Monitor** tile:
+10. To open the Azure Data Factory user interface (UI) on a separate tab, select **Open** on the **Open Azure Data Factory Studio** tile:
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
## Use the Copy Data tool to create a pipeline
data-factory Tutorial Incremental Copy Multiple Tables Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-multiple-tables-portal.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Incrementally load data from multiple tables in SQL Server to a database in Azure SQL Database using the Azure portal
END
8. Click **Create**. 9. After the creation is complete, you see the **Data Factory** page as shown in the image.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
-10. Click **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+10. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
## Create self-hosted integration runtime As you are moving data from a data store in a private network (on-premises) to an Azure data store, install a self-hosted integration runtime (IR) in your on-premises environment. The self-hosted IR moves data between your private network and Azure.
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Incrementally load data from multiple tables in SQL Server to Azure SQL Database using PowerShell
The pipeline takes a list of table names as a parameter. The **ForEach activity*
3. Search for your data factory in the list of data factories, and select it to open the **Data factory** page.
-4. On the **Data factory** page, select **Author & Monitor** to launch Azure Data Factory in a separate tab.
+4. On the **Data factory** page, select **Open** on the **Open Azure Data Factory Studio** tile to launch Azure Data Factory in a separate tab.
5. On the Azure Data Factory home page, select **Monitor** on the left side.
data-factory Tutorial Incremental Copy Partitioned File Name Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Incrementally copy new files based on time partitioned file name by using the Copy Data tool
Prepare your Blob storage for the tutorial by performing these steps.
6. Under **location**, select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores (for example, Azure Storage and SQL Database) and computes (for example, Azure HDInsight) that are used by your data factory can be in other locations and regions. 7. Select **Create**. 8. After creation is finished, the **Data Factory** home page is displayed.
-9. To launch the Azure Data Factory user interface (UI) in a separate tab, select the **Author & Monitor** tile.
+9. To launch the Azure Data Factory user interface (UI) in a separate tab, select **Open** on the **Open Azure Data Factory Studio** tile.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
## Use the Copy Data tool to create a pipeline
data-factory Tutorial Incremental Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-incremental-copy-portal.md
Previously updated : 06/04/2021 Last updated : 07/05/2021 # Incrementally load data from Azure SQL Database to Azure Blob storage using the Azure portal
END
8. Click **Create**. 9. After the creation is complete, you see the **Data Factory** page as shown in the image.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Author & Monitor tile.":::
-10. Click **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+
+10. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
## Create a pipeline In this tutorial, you create a pipeline with two Lookup activities, one Copy activity, and one StoredProcedure activity chained in one pipeline.
defender-for-iot Resources Agent Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md
Title: Azure Defender for IoT agent frequently asked questions
+ Title: Azure Defender for IoT for device builders frequently asked questions
description: Find answers to the most frequently asked questions about Azure Defender for IoT agent. Previously updated : 04/25/2021 Last updated : 07/07/2021
-# Azure Defender for IoT agent frequently asked questions
+# Azure Defender for IoT for device builders frequently asked questions
This article provides a list of frequently asked questions and answers about the Defender for IoT agent. ## Do I have to install an embedded security agent?
-Agent installation on your IoT devices isn't mandatory in order to enable Defender for IoT. You can choose between the following three options, gaining different levels of security monitoring and management capabilities according to your selection:
+Agent installation on your IoT devices isn't mandatory in order to enable Defender for IoT. You can choose between the following options, each providing a different level of security monitoring, management capability, and protection:
-- Passive, non-invasive (agentless) deployment using NTA (Network Traffic Analysis) sensors to monitor and provide deep visibility into IoT/OT risk with zero performance impact on the network and devices
-- Install the Defender for IoT embedded security agent with or without modifications. This option provides the highest level of enhanced security insights into device behavior and access.
-- Create your own agent and implement the Defender for IoT security message schema. This option enables usage of Defender for IoT analysis tools on top of your device security agent.
-- No security agent installation on your IoT devices. This option enables IoT Hub communication monitoring, with reduced security monitoring and management capabilities.
+- No security agent installation on your IoT devices. This option enables IoT Hub communication monitoring, with reduced security monitoring and management capabilities.
## What does the Defender for IoT agent do?
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-apis-sdks.md
This article gives an overview of the APIs available, and the methods for intera
## Overview: control plane APIs
-The control plane APIs are [ARM](../azure-resource-manager/management/overview.md) APIs used to manage your Azure Digital Twins instance as a whole, so they cover operations like creating or deleting your entire instance. You will also use these to create and delete endpoints.
+The control plane APIs are [ARM](../azure-resource-manager/management/overview.md) APIs used to manage your Azure Digital Twins instance as a whole, so they cover operations like creating or deleting your entire instance. You'll also use these APIs to create and delete endpoints.
The most current control plane API version is _**2020-12-01**_.
You can also exercise control plane APIs by interacting with Azure Digital Twins
## Overview: data plane APIs
-The data plane APIs are the Azure Digital Twins APIs used to manage the elements within your Azure Digital Twins instance. They include operations like creating routes, uploading models, creating relationships, and managing twins. They can be broadly divided into the following categories:
+The data plane APIs are the Azure Digital Twins APIs used to manage the elements within your Azure Digital Twins instance. They include operations like creating routes, uploading models, creating relationships, and managing twins, and can be broadly divided into the following categories:
* **DigitalTwinModels** - The DigitalTwinModels category contains APIs to manage the [models](concepts-models.md) in an Azure Digital Twins instance. Management activities include upload, validation, retrieval, and deletion of models authored in DTDL. * **DigitalTwins** - The DigitalTwins category contains the APIs that let developers create, modify, and delete [digital twins](concepts-twins-graph.md) and their relationships in an Azure Digital Twins instance. * **Query** - The Query category lets developers [find sets of digital twins in the twin graph](how-to-query-graph.md) across relationships.
You can also exercise date plane APIs by interacting with Azure Digital Twins th
## .NET (C#) SDK (data plane)
-The Azure Digital Twins .NET (C#) SDK is part of the Azure SDK for .NET. It is open source, and is based on the Azure Digital Twins data plane APIs.
+The Azure Digital Twins .NET (C#) SDK is part of the Azure SDK for .NET. It's open source, and is based on the Azure Digital Twins data plane APIs.
> [!NOTE] > For more information on SDK design, see the general [design principles for Azure SDKs](https://azure.github.io/azure-sdk/general_introduction.html) and the specific [.NET design guidelines](https://azure.github.io/azure-sdk/dotnet_introduction.html).
-To use the SDK, include the NuGet package **Azure.DigitalTwins.Core** with your project. You will also need the latest version of the **Azure.Identity** package. In Visual Studio, you can add these packages using the NuGet Package Manager (accessed through *Tools > NuGet Package Manager > Manage NuGet Packages for Solution*). Alternatively, you can use the .NET command line tool with the commands found in the NuGet package links below to add these to your project:
-* [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core). This is the package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
-* [Azure.Identity](https://www.nuget.org/packages/Azure.Identity). This library provides tools to help with authentication against Azure.
+To use the SDK, include the NuGet package **Azure.DigitalTwins.Core** with your project. You'll also need the latest version of the **Azure.Identity** package. In Visual Studio, you can add these packages using the NuGet Package Manager (accessed through *Tools > NuGet Package Manager > Manage NuGet Packages for Solution*). You can also use the .NET command-line tool with the commands found in the NuGet package links below to add these to your project:
+* [Azure.DigitalTwins.Core](https://www.nuget.org/packages/Azure.DigitalTwins.Core): The package for the [Azure Digital Twins SDK for .NET](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true).
+* [Azure.Identity](https://www.nuget.org/packages/Azure.Identity): The library that provides tools to help with authentication against Azure.
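
With both packages installed, a minimal client setup looks like the following sketch. The host name is a placeholder for your own instance, and `DefaultAzureCredential` is just one of the `Azure.Identity` options discussed later in this article:

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Assumption: replace the placeholder with your instance's host name from the Azure portal.
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-host-name>"),
    new DefaultAzureCredential());
```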
For a detailed walk-through of using the APIs in practice, see the [Tutorial: Code a client app](tutorial-code.md). ### Serialization helpers
-Serialization helpers are helper functions available within the SDK for quickly creating or deserializing twin data for access to basic information. Since the core SDK methods return twin data as JSON by default, it can be helpful to use these helper classes to break the twin data down further.
+Serialization helpers are helper functions available within the SDK for quickly creating or deserializing twin data for access to basic information. Since the core SDK methods return twin data as JSON by default, it can be helpful to use these helper classes to break down the twin data further.
The available helper classes are: * `BasicDigitalTwin`: Generically represents the core data of a digital twin
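
For example, here's a snippet sketching how to fetch a twin and read it through `BasicDigitalTwin` rather than as raw JSON. It assumes `client` is an authenticated `DigitalTwinsClient`, and the twin ID is hypothetical:

```csharp
Response<BasicDigitalTwin> response =
    await client.GetDigitalTwinAsync<BasicDigitalTwin>("myTwinId");
BasicDigitalTwin twin = response.Value;

// Core twin data is exposed as typed properties...
Console.WriteLine($"Twin {twin.Id} uses model {twin.Metadata.ModelId}");

// ...while model-defined properties are available in the Contents dictionary.
foreach (KeyValuePair<string, object> property in twin.Contents)
{
    Console.WriteLine($"{property.Key}: {property.Value}");
}
```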
The available helper classes are:
> [!NOTE] > Please note that Azure Digital Twins doesn't currently support **Cross-Origin Resource Sharing (CORS)**. For more info about the impact and resolution strategies, see the [Cross-Origin Resource Sharing (CORS)](concepts-security.md#cross-origin-resource-sharing-cors) section of *Concepts: Security for Azure Digital Twins solutions*.
-The following list provides additional detail and general guidelines for using the APIs and SDKs.
+The following list provides more detail and general guidelines for using the APIs and SDKs.
* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [How-to: Make API requests with Postman](how-to-use-postman.md).
-* To use the SDK, instantiate the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with a variety of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true).
-* You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true), which you will likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md?tabs=dotnet) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true).
-* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance resides. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned *even if* the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [How-to: Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
+* To use the SDK, instantiate the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with different kinds of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true).
+* You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true), which you'll likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md?tabs=dotnet) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true).
+* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance exists. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned *even if* the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [How-to: Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
* All service API calls are exposed as member functions on the `DigitalTwinsClient` class. * All service functions exist in synchronous and asynchronous versions. * All service functions throw an exception for any return status of 400 or above. Make sure you wrap calls into a `try` section, and catch at least `RequestFailedExceptions`. For more about this type of exception, see its [reference documentation](/dotnet/api/azure.requestfailedexception?view=azure-dotnet&preserve-view=true).
The following list provides additional detail and general guidelines for using t
* The underlying SDK is `Azure.Core`. See the [Azure namespace documentation](/dotnet/api/azure?view=azure-dotnet&preserve-view=true) for reference on the SDK infrastructure and types.
-Service methods return strongly-typed objects wherever possible. However, because Azure Digital Twins is based on models custom-configured by the user at runtime (via DTDL models uploaded to the service), many service APIs take and return twin data in JSON format.
+Service methods return strongly typed objects wherever possible. However, because Azure Digital Twins is based on models custom-configured by the user at runtime (via DTDL models uploaded to the service), many service APIs take and return twin data in JSON format.
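
Putting a few of these guidelines together, here's a sketch of a service call wrapped in the recommended `try` section (the twin ID is hypothetical, and `client` is assumed to be an authenticated `DigitalTwinsClient`):

```csharp
try
{
    // Response<T> converts implicitly to T, so the twin can be used directly.
    BasicDigitalTwin twin = await client.GetDigitalTwinAsync<BasicDigitalTwin>("myTwinId");
    Console.WriteLine($"Retrieved twin {twin.Id}");
}
catch (RequestFailedException ex)
{
    // Thrown for any return status of 400 or above.
    Console.WriteLine($"Request failed with status {ex.Status}: {ex.Message}");
}
```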
## Monitor API metrics
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-explorer-plugin.md
Combining data from a twin graph in Azure Digital Twins with time series data in
## Using the plugin
-In order to get the plugin running on your own Azure Data Explorer cluster that contains time series data, start by running the following command in Azure Data Explorer in order to enable the plugin:
-
-```kusto
-.enable plugin azure_digital_twins_query_request
-```
-
-This command requires **All Databases admin** permission. For more information on the command, see the [`.enable` plugin documentation](/azure/data-explorer/kusto/management/enable-plugin).
-
-Once the plugin is enabled, you can invoke it within a Kusto query with the following command. There are two placeholders, `<Azure-Digital-Twins-endpoint>` and `<Azure-Digital-Twins-query>`, which are strings representing the Azure Digital Twins instance endpoint and Azure Digital Twins query, respectively.
+You can invoke the plugin in a Kusto query with the following command. There are two placeholders, `<Azure-Digital-Twins-endpoint>` and `<Azure-Digital-Twins-query>`, which are strings representing the Azure Digital Twins instance endpoint and Azure Digital Twins query, respectively.
```kusto
evaluate azure_digital_twins_query_request(<Azure-Digital-Twins-endpoint>, <Azure-Digital-Twins-query>)
```
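
For instance, a client application could run the plugin query against an Azure Data Explorer cluster with the Kusto .NET client library. The following is a minimal sketch; the cluster URI, database name, and instance host name are placeholder assumptions, and the embedded Azure Digital Twins query is a simple example:

```csharp
using System;
using Kusto.Data;
using Kusto.Data.Common;
using Kusto.Data.Net.Client;

// Assumptions: cluster URI, database name, and instance host name are placeholders.
var kcsb = new KustoConnectionStringBuilder("https://<cluster>.<region>.kusto.windows.net")
    .WithAadUserPromptAuthentication();

using var provider = KustoClientFactory.CreateCslQueryProvider(kcsb);

// Invoke the plugin, passing the Azure Digital Twins endpoint and query as strings.
string query =
    "evaluate azure_digital_twins_query_request(" +
    "'https://<instance-host-name>', " +
    "'SELECT T.$dtId FROM DIGITALTWINS T')";

using var reader = provider.ExecuteQuery("<database>", query, new ClientRequestProperties());
while (reader.Read())
{
    Console.WriteLine(reader[0]);
}
```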
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-adopt.md
# Adopting an industry ontology
-Because it can be easier to start with an open-source DTDL ontology than starting from a blank page, Microsoft is partnering with domain experts to publish ontologies, which represent widely accepted industry conventions and support various customer use cases.
+Because it can be easier to start with an open-source DTDL ontology than from a blank page, Microsoft is partnering with domain experts to publish ontologies. These ontologies represent widely accepted industry conventions and support various customer use cases.
-The result is a set of open-source DTDL-based ontologies, which learn from, build on, learn from, or directly use industry standards. The ontologies are designed to meet the needs of downstream developers, with the potential to be widely adopted and/or extended by the industry.
+The result is a set of open-source DTDL-based ontologies, which learn from, build on, or directly use industry standards. The ontologies are designed to meet the needs of downstream developers, with the potential to be widely adopted and extended by the industry.
At this time, Microsoft has worked with partners to develop ontologies for [smart buildings](#realestatecore-smart-building-ontology), [smart cities](#smart-cities-ontology), and [energy grids](#energy-grid-ontology), which provide common ground for modeling based on standards in these industries to avoid the need for reinvention.
You can also read more about the partnerships and approach for smart cities in t
*Get the ontology from the following repository:* [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/).
-This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases (monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance) and facilitate digital transformation and modernization of the energy grid. It is adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling and physical energy commodity market.
+This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases (monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance) and enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market.
To learn more about the ontology, how to use it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-energygrid](https://github.com/Azure/opendigitaltwins-energygrid/).
digital-twins Concepts Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-query-language.md
Recall that the center of Azure Digital Twins is the [twin graph](concepts-twins-graph.md), constructed from digital twins and relationships.
-This graph can be queried to get information about the digital twins and relationships it contains. These queries are written in a custom SQL-like query language, referred to as the **Azure Digital Twins query language**. This is similar to the [IoT Hub query language](../iot-hub/iot-hub-devguide-query-language.md) with many comparable features.
+This graph can be queried to get information about the digital twins and relationships it contains. These queries are written in a custom SQL-like query language, referred to as the **Azure Digital Twins query language**. This language is similar to the [IoT Hub query language](../iot-hub/iot-hub-devguide-query-language.md) with many comparable features.
This article describes the basics of the query language and its capabilities. For more detailed examples of query syntax and how to run query requests, see [How-to: Query the twin graph](how-to-query-graph.md).
You can use the Azure Digital Twins query language to retrieve digital twins acc
* relationships - properties of the relationships
-To submit a query to the service from a client app, you will use the Azure Digital Twins [Query API](/rest/api/digital-twins/dataplane/query). One way to use the API is through one of the [SDKs for Azure Digital Twins](concepts-apis-sdks.md#overview-data-plane-apis).
+To submit a query to the service from a client app, you'll use the Azure Digital Twins [Query API](/rest/api/digital-twins/dataplane/query). One way to use the API is through one of the [SDKs for Azure Digital Twins](concepts-apis-sdks.md#overview-data-plane-apis).
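
For instance, in the .NET SDK the Query API is exposed as `Query`/`QueryAsync` on the client. Here's a minimal sketch, assuming `client` is an authenticated `DigitalTwinsClient`:

```csharp
AsyncPageable<BasicDigitalTwin> result =
    client.QueryAsync<BasicDigitalTwin>("SELECT * FROM DIGITALTWINS");

await foreach (BasicDigitalTwin twin in result)
{
    Console.WriteLine(twin.Id);
}
```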
[!INCLUDE [digital-twins-query-reference.md](../../includes/digital-twins-query-reference.md)]
To submit a query to the service from a client app, you will use the Azure Digit
When writing queries for Azure Digital Twins, keep the following considerations in mind: * **Remember case sensitivity**: All Azure Digital Twins query operations are case-sensitive, so take care to use the exact names defined in the models. If property names are misspelled or incorrectly cased, the result set is empty with no errors returned.
-* **Escape single quotes**: If your query text includes a single quote character in the data, the quote will need to be escaped with the `\` character. Here is an example that deals with a property value of *D'Souza*:
+* **Escape single quotes**: If your query text includes a single quote character in the data, the quote will need to be escaped with the `\` character. Here's an example that deals with a property value of *D'Souza*:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="EscapedSingleQuote":::
-* **Consider possible latency**: After making a change to the data in your graph, there may be a latency of up to 10 seconds before the changes will be reflected in queries. The [GetDigitalTwin API](how-to-manage-twin.md#get-data-for-a-digital-twin) does not experience this delay, so if you need an instant response, use the API call instead of querying to see your change reflected immediately.
+* **Consider possible latency**: After making a change to the data in your graph, there may be a latency of up to 10 seconds before the changes will be reflected in queries. The [GetDigitalTwin API](how-to-manage-twin.md#get-data-for-a-digital-twin) doesn't experience this delay, so if you need an instant response, use the API call instead of querying to see your change reflected immediately.
## Next steps
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
The following code sample illustrates how to create a relationship in your Azure
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="CreateRelationshipMethod" highlight="13":::
-This custom function can now be called to create a _contains_ relationship like this:
+This custom function can now be called to create a _contains_ relationship in the following way:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseCreateRelationship":::
Relationships can be classified as either:
* Outgoing relationships: Relationships belonging to this twin that point outward to connect it to other twins. The `GetRelationshipsAsync()` method is used to get outgoing relationships of a twin. * Incoming relationships: Relationships belonging to other twins that point towards this twin to create an "incoming" link. The `GetIncomingRelationshipsAsync()` method is used to get incoming relationships of a twin.
-There is no restriction on the number of relationships that you can have between two twinsΓÇöyou can have as many relationships between twins as you like.
+There's no restriction on the number of relationships that you can have between two twinsΓÇöyou can have as many relationships between twins as you like.
This means that you can express several different types of relationships between two twins at once. For example, Twin A can have both a *stored* relationship and *manufactured* relationship with Twin B.
-You can even create multiple instances of the same type of relationship between the same two twins, if desired. In this example, Twin A could have two different *stored* relationships with Twin B, as long as the relationships have different relationship IDs.
+You can even create multiple instances of the same type of relationship between the same two twins, if you want. In this example, Twin A could have two different *stored* relationships with Twin B, as long as the relationships have different relationship IDs.
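
Here's a sketch of that example with the .NET SDK: two *stored* relationships from Twin A to Twin B, distinguished only by relationship ID. All IDs are hypothetical, and `client` is assumed to be an authenticated `DigitalTwinsClient`:

```csharp
var stored = new BasicRelationship
{
    TargetId = "TwinB",
    Name = "stored"
};

// The same relationship type can be created twice, as long as the relationship IDs differ.
await client.CreateOrReplaceRelationshipAsync("TwinA", "storedRelationship1", stored);
await client.CreateOrReplaceRelationshipAsync("TwinA", "storedRelationship2", stored);
```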
## List relationships
You can always deserialize relationship data to a type of your choice. For basic
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="ListRelationshipProperties":::
-### Find outgoing relationships from a digital twin
+### List outgoing relationships from a digital twin
To access the list of **outgoing** relationships for a given twin in the graph, you can use the `GetRelationships()` method like this: :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="GetRelationshipsCall":::
-This returns an `Azure.Pageable<T>` or `Azure.AsyncPageable<T>`, depending on whether you use the synchronous or asynchronous version of the call.
+This method returns an `Azure.Pageable<T>` or `Azure.AsyncPageable<T>`, depending on whether you use the synchronous or asynchronous version of the call.
-Here is an example that retrieves a list of relationships. It uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
+Here's an example that retrieves a list of relationships. It uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="FindOutgoingRelationshipsMethod" highlight="8":::
You can now call this custom method to see the outgoing relationships of the twi
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseFindOutgoingRelationships":::
-You can use the retrieved relationships to navigate to other twins in your graph. To do this, read the `target` field from the relationship that is returned, and use it as the ID for your next call to `GetDigitalTwin()`.
+You can use the retrieved relationships to navigate to other twins in your graph by reading the `target` field from the relationship that is returned, and using it as the ID for your next call to `GetDigitalTwin()`.
-### Find incoming relationships to a digital twin
+### List incoming relationships to a digital twin
-Azure Digital Twins also has an API to find all **incoming** relationships to a given twin. This is often useful for reverse navigation, or when deleting a twin.
+Azure Digital Twins also has an SDK call to find all **incoming** relationships to a given twin. This call is often useful for reverse navigation, or when deleting a twin.
>[!NOTE] > `IncomingRelationship` calls don't return the full body of the relationship. For more information on the `IncomingRelationship` class, see its [reference documentation](/dotnet/api/azure.digitaltwins.core.incomingrelationship?view=azure-dotnet&preserve-view=true).
You can now call this custom method to see the incoming relationships of the twi
### List all twin properties and relationships
-Using the above methods for listing outgoing and incoming relationships to a twin, you can create a method that prints full twin information, including the twin's properties and both types of its relationships. Here is an example custom method showing how to combine the above custom methods for this purpose.
+Using the above methods for listing outgoing and incoming relationships to a twin, you can create a method that prints full twin information, including the twin's properties and both types of its relationships. Here's an example custom method showing how to combine the above custom methods for this purpose.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="FetchAndPrintMethod":::
Relationships are updated using the `UpdateRelationship` method.
>[!NOTE] >This method is for updating the **properties** of a relationship. If you need to change the source twin or target twin of the relationship, you'll need to [delete the relationship](#delete-relationships) and [re-create one](#create-relationships) using the new twins.
-The required parameters for the client call are the ID of the source twin (the twin where the relationship originates), the ID of the relationship to update, and a [JSON Patch](http://jsonpatch.com/) document containing the properties and new values you want to update.
+The required parameters for the client call are:
+- The ID of the source twin (the twin where the relationship originates).
+- The ID of the relationship to update.
+- A [JSON Patch](http://jsonpatch.com/) document containing the properties and new values you want to update.
-Here is sample code showing how to use this method. This example uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
+Here's a sample code snippet showing how to use this method. This example uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UpdateRelationshipMethod" highlight="6":::
-Here is an example of a call to this custom method, passing in a JSON Patch document with the information to update a property.
+Here's an example of a call to this custom method, passing in a JSON Patch document with the information to update a property.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="UseUpdateRelationship":::
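
For a quick illustration of the shape of such a call, here's a sketch that builds the patch with `Azure.JsonPatchDocument` and applies it. The twin ID, relationship ID, and property name are hypothetical, and `client` is assumed to be an authenticated `DigitalTwinsClient`:

```csharp
var patch = new Azure.JsonPatchDocument();

// Replace the value of a property defined on the relationship.
patch.AppendReplace("/ownershipUser", "NewOwner");

await client.UpdateRelationshipAsync("TwinA", "storedRelationship1", patch);
```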
Here is an example of a call to this custom method, passing in a JSON Patch docu
The first parameter specifies the source twin (the twin where the relationship originates). The other parameter is the relationship ID. You need both the twin ID and the relationship ID, because relationship IDs are only unique within the scope of a twin.
-Here is sample code showing how to use this method. This example uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
+Here's sample code showing how to use this method. This example uses the SDK call (highlighted) inside a custom method that might appear in the context of a larger program.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="DeleteRelationshipMethod" highlight="5":::
Consider the following data table, describing a set of digital twins 
| dtmi:example:Room;1 | Room1 | | | {"Temperature": 80} |
| dtmi:example:Room;1 | Room0 | | | {"Temperature": 70} |
-One way to get this data into Azure Digital Twins is to convert the table to a CSV file and write code to interpret the file into commands to create twins and relationships. The following code sample illustrates reading the data from the CSV file and creating a twin graph in Azure Digital Twins.
+One way to get this data into Azure Digital Twins is to convert the table to a CSV file. Once the table is converted, you can write code that interprets the file into commands to create twins and relationships. The following code sample illustrates reading the data from the CSV file and creating a twin graph in Azure Digital Twins.
-In the code below, the CSV file is called *data.csv*, and there is a placeholder representing the **host name** of your Azure Digital Twins instance. The sample also makes use of several packages that you can add to your project to help with this process.
+In the code below, the CSV file is called *data.csv*, and there's a placeholder representing the **host name** of your Azure Digital Twins instance. The sample also makes use of several packages that you can add to your project to help with this process.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graphFromCSV.cs":::
Next, complete the following steps to configure your project code:
dotnet add package Azure.Identity ```
-You'll also need to set up local credentials if you want to run the sample directly. The next section walks through this.
+You'll also need to set up local credentials if you want to run the sample directly. The next section walks through this process.
[!INCLUDE [Azure Digital Twins: local credentials prereq (outer)](../../includes/digital-twins-local-credentials-outer.md)] ### Run the sample Now that you've completed setup, you can run the sample code project.
-Here is the console output of the program:
+Here's the console output of the program:
:::image type="content" source="./media/how-to-manage-graph/console-output-twin-graph.png" alt-text="Screenshot of the console output showing the twin details with incoming and outgoing relationships of the twins." lightbox="./media/how-to-manage-graph/console-output-twin-graph.png":::
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
This article describes how to manage the [models](concepts-models.md) in your Az
## Create models
-Models for Azure Digital Twins are written in DTDL, and saved as .json files. There is also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/), which provides syntax validation and other features to facilitate writing DTDL documents.
+Models for Azure Digital Twins are written in DTDL, and saved as .json files. There's also a [DTDL extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-dtdl) available for [Visual Studio Code](https://code.visualstudio.com/), which provides syntax validation and other features to make it easier to write DTDL documents.
Consider an example in which a hospital wants to digitally represent their rooms. Each room contains a smart soap dispenser for monitoring hand-washing, and sensors to monitor traffic through the room.
The first step towards the solution is to create models to represent aspects of
> [!NOTE] > This is a sample body for a .json file in which a model is defined and saved, to be uploaded as part of a client project. The REST API call, on the other hand, takes an array of model definitions like the one above (which is mapped to a `IEnumerable<string>` in the .NET SDK). So to use this model in the REST API directly, surround it with brackets.
-This model defines a name and a unique ID for the patient room, and properties to represent visitor count and hand-wash status (these counters will be updated from motion sensors and smart soap dispensers, and will be used together to calculate a *handwash percentage* property). The model also defines a relationship *hasDevices*, which will be used to connect any [digital twins](concepts-twins-graph.md) based on this Room model to the actual devices.
+This model defines a name and a unique ID for the patient room, and properties to represent visitor count and hand-wash status. These counters will be updated from motion sensors and smart soap dispensers, and will be used together to calculate a *handwash percentage* property. The model also defines a relationship *hasDevices*, which will be used to connect any [digital twins](concepts-twins-graph.md) based on this Room model to the actual devices.
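+
+As a hedged sketch, a DTDL body matching that description could look like the following (the ID, names, and schemas here are illustrative, not the article's exact sample):
+
+```json
+{
+  "@id": "dtmi:example:PatientRoom;1",
+  "@type": "Interface",
+  "@context": "dtmi:dtdl:context;2",
+  "displayName": "Patient Room",
+  "contents": [
+    { "@type": "Property", "name": "visitorCount", "schema": "double" },
+    { "@type": "Property", "name": "handWashCount", "schema": "double" },
+    { "@type": "Property", "name": "handWashPercentage", "schema": "double" },
+    { "@type": "Relationship", "name": "hasDevices" }
+  ]
+}
+```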
Following this method, you can go on to define models for the hospital's wards, zones, or the hospital itself.
On upload, model files are validated by the service.
You can list and retrieve models stored on your Azure Digital Twins instance.
-Here are your options for this:
+Your options include:
* Retrieve a single model
* Retrieve all models
* Retrieve metadata and dependencies for models
Here are some example calls:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="GetModels":::
-The API calls to retrieve models all return `DigitalTwinsModelData` objects. `DigitalTwinsModelData` contains metadata about the model stored in the Azure Digital Twins instance, such as name, DTMI, and creation date of the model. The `DigitalTwinsModelData` object also optionally includes the model itself. Depending on parameters, you can thus use the retrieve calls to either retrieve just metadata (which is useful in scenarios where you want to display a UI list of available tools, for example), or the entire model.
+The SDK calls to retrieve models all return `DigitalTwinsModelData` objects. `DigitalTwinsModelData` contains metadata about the model stored in the Azure Digital Twins instance, such as name, DTMI, and creation date of the model. The `DigitalTwinsModelData` object also optionally includes the model itself, meaning that, depending on parameters, you can use the retrieve calls to either retrieve just metadata (useful in scenarios where you want to display a UI list of available tools, for example) or the entire model.
The `RetrieveModelWithDependencies` call returns not only the requested model, but also all models that the requested model depends on.
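Under the hood, a hedged sketch of that kind of retrieval with the SDK options (assuming a `DigitalTwinsClient` named `client` and an illustrative model ID) might look like this:

```csharp
// Ask for full DTDL definitions, including everything the model depends on.
var options = new GetModelsOptions
{
    DependenciesFor = new[] { "dtmi:example:PatientRoom;1" },
    IncludeModelDefinition = true
};

await foreach (DigitalTwinsModelData model in client.GetModelsAsync(options))
{
    Console.WriteLine($"{model.Id}: {model.DtdlModel}");
}
```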
-Models are not necessarily returned in exactly the document form they were uploaded in. Azure Digital Twins only guarantees that the return form will be semantically equivalent.
+Models aren't necessarily returned in exactly the document form they were uploaded in. Azure Digital Twins only guarantees that the return form will be semantically equivalent.
## Update models
-Once a model is uploaded to your Azure Digital Twins instance, the model interface is immutable. This means there's no traditional "editing" of models. Azure Digital Twins also does not allow re-upload of the same exact model while a matching model is already present in the instance.
+Once a model is uploaded to your Azure Digital Twins instance, the model interface is immutable, which means there's no traditional "editing" of models. Azure Digital Twins also doesn't allow reupload of the same exact model while a matching model is already present in the instance.
Instead, if you want to make changes to a model, such as updating `displayName` or `description`, or adding and removing properties, you'll need to replace the original model. There are two strategies to choose from when replacing a model:
* [Option 1: Upload new model version](#option-1-upload-new-model-version): Upload the model, with a new version number, and update your twins to use that new model. Both the new and old versions of the model will exist in your instance until you delete one.
- - **Use this strategy when** you want to make sure twins stay valid at all times through the model transition, or you want to keep a record of what versions a model has gone through. This is also a good choice if you have many models that depend on the model you want to update.
-* [Option 2: Delete old model and re-upload](#option-2-delete-old-model-and-re-upload): Delete the original model and upload the new model with the same name and ID (DTMI value) in its place. Completely replaces the old model with the new one.
- - **Use this strategy when** you want to remove all record of the older model. Twins will be invalid for a short time while you're transitioning them from the old model to the new one.
+ - **Use this strategy when** you want to make sure twins stay valid at all times through the model transition, or you want to keep a record of what versions a model has gone through. This strategy is also a good choice if you have many models that depend on the model you want to update.
+* [Option 2: Delete old model and reupload](#option-2-delete-old-model-and-reupload): Delete the original model and upload the new model with the same name and ID (DTMI value) in its place. Completely replaces the old model with the new one.
+ - **Use this strategy when** you want to remove all record of the older model. Twins will be invalid for a short time while you're transitioning them from the old model to the new one, meaning that they won't be able to take any updates until the new model is uploaded and the twins conform to it.
### Option 1: Upload new model version This option involves creating a new version of the model and uploading it to your instance.
-This **does not** overwrite earlier versions of the model, so multiple versions of the model will coexist in your instance until you [remove them](#remove-models). Since the new model version and the old model version coexist, twins can use either the new version of the model or the older version. This also means that uploading a new version of a model does not automatically affect existing twins. The existing twins will remain as instances of the old model version, and you can update these twins to the new model version by patching them.
+This operation **doesn't** overwrite earlier versions of the model, so multiple versions of the model will coexist in your instance until you [remove them](#remove-models). Since the new model version and the old model version coexist, twins can use either the new version of the model or the older version, meaning that uploading a new version of a model doesn't automatically affect existing twins. The existing twins will remain as instances of the old model version, and you can update these twins to the new model version by patching them.
To use this strategy, follow the steps below.
To create a new version of an existing model, start with the DTDL of the original model. Update, add, or remove the fields you want to change.
-Next, mark this as a newer version of the model by updating the `id` field of the model. The last section of the model ID, after the `;`, represents the model number. To indicate that this is now a more-updated version of this model, increment the number at the end of the `id` value to any number greater than the current version number.
+Next, mark this model as a newer version by updating the `id` field of the model. The last section of the model ID, after the `;`, represents the model number. To indicate that this model is now a more-updated version, increment the number at the end of the `id` value to any number greater than the current version number.
For example, if your previous model ID looked like this:
Next, update the **twins and relationships** in your instance to use the new mod
>[!IMPORTANT] >When updating twins, use the **same patch** to update both the model ID (to the new model version) and any fields that must be altered on the twin to make it conform to the new model.
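A hedged sketch of such a combined patch (assuming a `DigitalTwinsClient` named `client`, with illustrative twin, model, and property names):

```csharp
// One patch that both upgrades the twin's model and adjusts a property
// so the twin conforms to the new model version.
var patch = new Azure.JsonPatchDocument();
patch.AppendReplace("/$metadata/$model", "dtmi:example:PatientRoom;2"); // point at the new version
patch.AppendAdd("/numberOfBeds", 2); // illustrative property required by the new version

await client.UpdateDigitalTwinAsync("PatientRoom21", patch);
```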
-You may also need to update **relationships** and other **models** in your instance that reference this model, to make them refer to the new model version. This will be another model update operation, so return to the beginning of this section and repeat the process for any additional models that need updating.
+You may also need to update **relationships** and other **models** in your instance that reference this model, to make them refer to the new model version. This requires another model update operation, so return to the beginning of this section and repeat the process for any other models that need updating.
#### 3. (Optional) Decommission or delete old model version
-If you won't be using the old model version anymore, you can [decommission](#decommissioning) the older model. This will allow it to keep existing in the instance, but it can't be used to create new digital twins.
+If you won't be using the old model version anymore, you can [decommission](#decommissioning) the older model. The model will remain in the instance, but it can't be used to create new digital twins.
You can also [delete](#deletion) the old model completely if you don't want it in the instance anymore at all. The sections linked above contain example code and considerations for decommissioning and deleting models.
-### Option 2: Delete old model and re-upload
+### Option 2: Delete old model and reupload
-Instead of incrementing the version of a model, you can delete a model completely and re-upload an edited model to the instance.
+Instead of incrementing the version of a model, you can delete a model completely and reupload an edited model to the instance.
-Azure Digital Twins doesn't remember the old model was ever uploaded, so this will be like uploading a completely new model. Twins in the graph that use the model will automatically switch over to the new definition once it's available. Depending on how the new definition differs from the old one, these twins may have properties and relationships that match the deleted definition and are not valid with the new one, so you may need to patch them to make sure they remain valid.
+Azure Digital Twins doesn't remember that the old model was ever uploaded, so this action will be like uploading an entirely new model. Twins in the graph that use the model will automatically switch over to the new definition once it's available. Depending on how the new definition differs from the old one, these twins may have properties and relationships that match the deleted definition and aren't valid with the new one, so you may need to patch them to make sure they remain valid.
To use this strategy, follow the steps below. ### 1. Delete old model
-Since Azure Digital Twins does not allow two models with the same ID, start by deleting the original model from your instance.
+Since Azure Digital Twins doesn't allow two models with the same ID, start by deleting the original model from your instance.
>[!NOTE] > If you have other models that depend on this model (through inheritance or components), you'll need to remove those references before you can delete the model. You can update those dependent models first to temporarily remove the references, or delete the dependent models and reupload them in a later step.
-Use the following instructions to [delete your original model](#deletion). This will leave your twins that were using that model temporarily "orphaned," as they're now using a model that no longer exists. This state will be repaired in the next step when you reupload the updated model.
+Use the following instructions to [delete your original model](#deletion). This action will leave your twins that were using that model temporarily "orphaned," as they're now using a model that no longer exists. This state will be repaired in the next step when you reupload the updated model.
### 2. Create and upload new model
Now that your new model has been uploaded in place of the old one, the twins in
>[!NOTE] > If you removed other dependent models earlier in order to delete the original model, reupload them now after the cache has reset. If you updated the dependent models to temporarily remove references to the original model, you can update them again to put the reference back.
-Next, update the **twins and relationships** in your instance so their properties match the properties defined by the new model. There are two ways to do this:
+Next, update the **twins and relationships** in your instance so their properties match the properties defined by the new model. There are two ways to update them:
* Patch the twins and relationships as needed so they fit the new model. You can use the following instructions to [update twins](how-to-manage-twin.md#update-a-digital-twin) and [update relationships](how-to-manage-graph.md#update-relationships).
- - **If you've added properties**: Updating twins and relationships to have the new values isn't required, since twins missing the new values will still be valid twins. You can patch them as desired to add values for the new properties.
- - **If you've removed properties**: You must patch twins to remove the properties that are now invalid with the new model.
- - **If you've updated properties**: You must patch twins to update the values of changed properties to be valid with the new model.
+ - **If you've added properties**: Updating twins and relationships to have the new values isn't required, since twins missing the new values will still be valid twins. You can patch them as needed to add values for the new properties.
+ - **If you've removed properties**: You must patch twins to remove the properties that are now invalid with the new model.
+ - **If you've updated properties**: You must patch twins to update the values of changed properties so they're valid with the new model (see the patch sketch after this list).
* Delete twins and relationships that use the model, and recreate them. You can use the following instructions to [delete twins](how-to-manage-twin.md#delete-a-digital-twin) and [recreate twins](how-to-manage-twin.md#create-a-digital-twin), and [delete relationships](how-to-manage-graph.md#delete-relationships) and [recreate relationships](how-to-manage-graph.md#create-relationships).
- - You might want to do this if you're making a lot of changes to the model, and it will be difficult to update the existing twins to match it. However, recreation can be complicated if you have a lot of twins that are interconnected by many relationships.
+ - You might want to do this operation if you're making many changes to the model, and it will be difficult to update the existing twins to match it. However, recreation can be complicated if you have many twins that are interconnected by many relationships.
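+
+Here's a minimal sketch of the patching approach for removed or changed properties (assuming a `DigitalTwinsClient` named `client` and hypothetical property names):
+
+```csharp
+var patch = new Azure.JsonPatchDocument();
+patch.AppendRemove("/obsoleteProperty"); // drop a property the new model no longer defines
+patch.AppendReplace("/temperature", 72); // update a changed property to a value valid in the new model
+
+await client.UpdateDigitalTwinAsync("Room1", patch);
+```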
## Remove models

Models can be removed from the service in one of two ways:

* **Decommissioning**: Once a model is decommissioned, you can no longer use it to create new digital twins. Existing digital twins that already use this model aren't affected, so you can still update them with things like property changes and adding or deleting relationships.
-* **Deletion** : This will completely remove the model from the solution. Any twins that were using this model are no longer associated with any valid model, so they're treated as though they don't have a model at all. You can still read these twins, but won't be able to make any updates on them until they're reassigned to a different model.
+* **Deletion**: This operation will completely remove the model from the solution. Any twins that were using this model are no longer associated with any valid model, so they're treated as though they don't have a model at all. You can still read these twins, but you can't make any updates on them until they're reassigned to a different model.
-These are separate features and they do not impact each other, although they may be used together to remove a model gradually.
+These operations are separate features and they don't impact each other, although they may be used together to remove a model gradually.
### Decommissioning
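As a minimal sketch (assuming a `DigitalTwinsClient` named `client` and an illustrative model ID), decommissioning with the .NET SDK looks like this:

```csharp
// After this call, existing twins that use the model keep working,
// but the model can no longer be used to create new twins.
await client.DecommissionModelAsync("dtmi:example:PatientRoom;1");
```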
To delete an individual model, follow the instructions and considerations from t
Generally, models can be deleted at any time.
-The exception is models that other models depend on, either with an `extends` relationship or as a component. For example, if a ConferenceRoom model extends a Room model, and has a ACUnit model as a component, you cannot delete Room or ACUnit until ConferenceRoom removes those respective references.
+The exception is models that other models depend on, either with an `extends` relationship or as a component. For example, if a ConferenceRoom model extends a Room model, and has an ACUnit model as a component, you can't delete Room or ACUnit until ConferenceRoom removes those respective references.
You can remove those references by updating the dependent model, or by deleting the dependent model completely.
Even if a model meets the requirements to delete it immediately, you may want to
1. First, decommission the model
2. Wait a few minutes, to make sure the service has processed any last-minute twin creation requests sent before the decommission
3. Query twins by model to see all twins that are using the now-decommissioned model
-4. Delete the twins if you no longer need them, or patch them to a new model if needed. You can also choose to leave them alone, in which case they will become twins without models once the model is deleted. See the next section for the implications of this state.
+4. Delete the twins if you no longer need them, or patch them to a new model if needed. You can also choose to leave them alone, in which case they'll become twins without models once the model is deleted. See the next section for the implications of this state.
5. Wait for another few minutes to make sure the changes have percolated through
6. Delete the model
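A hedged sketch of steps 1, 3, and 6 with the .NET SDK (assuming a `DigitalTwinsClient` named `client`; the model ID is illustrative):

```csharp
string modelId = "dtmi:example:PatientRoom;1";

// Step 1: decommission the model so it can't be used for new twins.
await client.DecommissionModelAsync(modelId);

// Step 3: query for twins that still use the now-decommissioned model.
string query = $"SELECT * FROM DIGITALTWINS WHERE IS_OF_MODEL('{modelId}')";
await foreach (BasicDigitalTwin twin in client.QueryAsync<BasicDigitalTwin>(query))
{
    Console.WriteLine($"Twin {twin.Id} still uses {modelId}");
}

// Step 6: delete the model once the affected twins have been handled.
await client.DeleteModelAsync(modelId);
```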
You can also delete a model with the [DigitalTwinModels Delete](/rest/api/digita
#### After deletion: Twins without models
-Once a model is deleted, any digital twins that were using the model are now considered to be without a model. Note that there is no query that can give you a list of all the twins in this stateΓÇöalthough you *can* still query the twins by the deleted model to know what twins are affected.
+Once a model is deleted, any digital twins that were using the model are now considered to be without a model. There's no query that can give you a list of all the twins in this state, although you *can* still query the twins by the deleted model to know what twins are affected.
-Here is an overview of what you can and cannot do with twins that don't have a model.
+Here's an overview of what you can and can't do with twins that don't have a model.
Things you **can** do:
* Query the twin
Things you **can't** do:
* Edit outgoing relationships (as in, relationships *from* this twin to other twins)
* Edit properties
-#### After deletion: Re-uploading a model
+#### After deletion: Reuploading a model
After a model has been deleted, you may decide later to upload a new model with the same ID as the one you deleted. Here's what happens in that case.
-* From the solution store's perspective, this is the same as uploading a completely new model. The service doesn't remember the old one was ever uploaded.
-* If there are any remaining twins in the graph referencing the deleted model, they are no longer orphaned; this model ID is valid again with the new definition. However, if the new definition for the model is different than the model definition that was deleted, these twins may have properties and relationships that match the deleted definition and are not valid with the new one.
+* From the solution store's perspective, this operation is the same as uploading an entirely new model. The service doesn't remember that the old one was ever uploaded.
+* If there are any remaining twins in the graph referencing the deleted model, they're no longer orphaned; this model ID is valid again with the new definition. However, if the new definition for the model is different from the model definition that was deleted, these twins may have properties and relationships that match the deleted definition and aren't valid with the new one.
-Azure Digital Twins does not prevent this state, so be careful to patch twins appropriately in order to make sure they remain valid through the model definition switch.
+Azure Digital Twins doesn't prevent this state, so be careful to patch twins appropriately to make sure they remain valid through the model definition switch.
## Next steps
digital-twins How To Manage Routes Apis Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
Follow the steps below to set up these storage resources in your Azure account,
1. Follow the steps in [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) to create a **storage account** in your Azure subscription. Make a note of the storage account name to use it later.
2. Follow the steps in [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to create a **container** within the new storage account. Make a note of the container name to use it later.
-3. Next, create a **SAS token** for your storage account that the endpoint can use to access it. Start by navigating to your storage account in the [Azure portal](https://ms.portal.azure.com/#home) (you can find it by name with the portal search bar).
-4. In the storage account page, choose the _Shared access signature_ link in the left navigation bar to start setting up the SAS token.
+3. Retrieve your storage account keys using the following command and copy the value for either one of your keys:
- :::image type="content" source="./media/how-to-manage-routes-apis-cli/generate-sas-token-1.png" alt-text="Screenshot of the storage account page in the Azure portal." lightbox="./media/how-to-manage-routes-apis-cli/generate-sas-token-1.png":::
+ ```azurecli
+ az storage account keys list --account-name <storage-account-name>
+ ```
-1. On the *Shared access signature page*, under *Allowed services* and *Allowed resource types*, select whatever settings you want. You'll need to select at least one box in each category. Under *Allowed permissions*, choose **Write** (you can also select other permissions if you want).
-1. Set whatever values you want for the remaining settings.
-1. When you're finished, select the _Generate SAS and connection string_ button to generate the SAS token.
+4. Select an expiration date and generate the SAS token for your storage account using the following command:
- :::image type="content" source="./media/how-to-manage-routes-apis-cli/generate-sas-token-2.png" alt-text="Screenshot of the storage account page in the Azure portal showing all the setting selection to generate a SAS token." lightbox="./media/how-to-manage-routes-apis-cli/generate-sas-token-2.png":::
+ ```azurecli
+ az storage account generate-sas --account-name <storage-account-name> --account-key <storage-account-key> --expiry <expiration-date> --services bfqt --resource-types o --permissions w
+ ```
-1. This will generate several SAS and connection string values at the bottom of the same page, underneath the setting selections. Scroll down to view the values, and use the *Copy to clipboard* icon to copy the **SAS token** value. Save it to use later.
+ The output of this command is the SAS token. Copy the SAS token value to use later.
- :::image type="content" source="./media/how-to-manage-routes-apis-cli/copy-sas-token.png" alt-text="Screenshot of the storage account page in the Azure portal highlighting how to copy the SAS token to use in the dead-letter secret." lightbox="./media/how-to-manage-routes-apis-cli/copy-sas-token.png":::
+ > [!NOTE]
+ > This command includes "**b**lob", "**f**ile", "**q**ueue", and "**t**able" *services*; an "**o**bject" *resource type*; and allows "**w**rite" *permissions*.
+ >
+ > For more information about the `az storage account generate-sas` command and its parameters, see the [Azure CLI reference](/cli/azure/storage/account?view=azure-cli-latest&preserve-view=true#az_storage_account_generate_sas).
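+
+ For example, a filled-in call might look like this (the account name and expiry date are placeholders):
+
+ ```azurecli
+ az storage account generate-sas --account-name mystorageaccount --account-key "<storage-account-key>" --expiry 2021-12-31T23:59Z --services bfqt --resource-types o --permissions w
+ ```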
#### Create the dead-letter endpoint
digital-twins How To Manage Routes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-portal.md
Now the event grid, event hub, or Service Bus topic is available as an endpoint
When an endpoint can't deliver an event within a certain time period or after trying to deliver the event a certain number of times, it can send the undelivered event to a storage account. This process is known as **dead-lettering**.
+#### Set up storage resources
+
+Before setting the dead-letter location, you must have a [storage account](../storage/common/storage-account-create.md?tabs=azure-portal) with a [container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) set up in your Azure account.
+
+You'll provide the URI for this container when creating the endpoint later. The dead-letter location will be provided to the endpoint as a container URI with a [SAS token](../storage/common/storage-sas-overview.md). That token needs `write` permission for the destination container within the storage account. The fully formed **dead letter SAS URI** will be in the format of: `https://<storage-account-name>.blob.core.windows.net/<container-name>?<SAS-token>`.
+
+Follow the steps below to set up these storage resources in your Azure account, to prepare to set up the endpoint connection in the next section.
+
+1. Follow the steps in [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) to create a **storage account** in your Azure subscription. Make a note of the storage account name to use it later.
+2. Follow the steps in [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) to create a **container** within the new storage account. Make a note of the container name to use it later.
+3. Next, create a **SAS token** for your storage account that the endpoint can use to access it. Start by navigating to your storage account in the [Azure portal](https://ms.portal.azure.com/#home) (you can find it by name with the portal search bar).
+4. In the storage account page, choose the _Shared access signature_ link in the left navigation bar to start setting up the SAS token.
+
+ :::image type="content" source="./media/how-to-manage-routes-portal/generate-sas-token-1.png" alt-text="Screenshot of the storage account page in the Azure portal." lightbox="./media/how-to-manage-routes-portal/generate-sas-token-1.png":::
+
+1. On the *Shared access signature* page, under *Allowed services* and *Allowed resource types*, select whatever settings you want. You'll need to select at least one box in each category. Under *Allowed permissions*, choose **Write** (you can also select other permissions if you want).
+1. Set whatever values you want for the remaining settings.
+1. When you're finished, select the _Generate SAS and connection string_ button to generate the SAS token.
+
+ :::image type="content" source="./media/how-to-manage-routes-portal/generate-sas-token-2.png" alt-text="Screenshot of the storage account page in the Azure portal showing all the setting selection to generate a SAS token." lightbox="./media/how-to-manage-routes-portal/generate-sas-token-2.png":::
+
+1. Selecting this button generates several SAS and connection string values at the bottom of the same page, underneath the setting selections. Scroll down to view the values, and use the *Copy to clipboard* icon to copy the **SAS token** value. Save it to use later.
+
+ :::image type="content" source="./media/how-to-manage-routes-portal/copy-sas-token.png" alt-text="Screenshot of the storage account page in the Azure portal highlighting how to copy the SAS token to use in the dead-letter secret." lightbox="./media/how-to-manage-routes-portal/copy-sas-token.png":::
+
+#### Create the dead-letter endpoint
+ In order to create an endpoint with dead-lettering enabled, you must use the [CLI commands](/cli/azure/dt?view=azure-cli-latest&preserve-view=true) or [control plane APIs](/rest/api/digital-twins/controlplane/endpoints/digitaltwinsendpoint_createorupdate) to create your endpoint, rather than the Azure portal.
-For instructions on how to do this with these tools, see the [APIs and CLI](how-to-manage-routes-apis-cli.md#create-an-endpoint-with-dead-lettering) version of this article.
+For instructions on how to do this with these tools, see the [APIs and CLI](how-to-manage-routes-apis-cli.md#create-the-dead-letter-endpoint) version of this article.
## Create an event route
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
This article offers query examples and instructions for using the **Azure Digital Twins query language** to query your [twin graph](concepts-twins-graph.md) for information. (For an introduction to the query language, see [Concepts: Query language](concepts-query-language.md).)
-It contains sample queries that illustrate the query language structure and common query operations for digital twins. It also describes how to run your queries after you've written them, using the Azure Digital Twins [Query API](/rest/api/digital-twins/dataplane/query) or an [SDK](concepts-apis-sdks.md#overview-data-plane-apis).
+The article contains sample queries that illustrate the query language structure and common query operations for digital twins. It also describes how to run your queries after you've written them, using the Azure Digital Twins [Query API](/rest/api/digital-twins/dataplane/query) or an [SDK](concepts-apis-sdks.md#overview-data-plane-apis).
> [!NOTE] > If you're running the sample queries below with an API or SDK call, you'll need to condense the query text into a single line.
It contains sample queries that illustrate the query language structure and comm
## Show all digital twins
-Here is the basic query that will return a list of all digital twins in the instance:
+Here's the basic query that will return a list of all digital twins in the instance:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="GetAllTwins":::
As shown in the query above, the ID of a digital twin is queried using the metad
>[!TIP] > If you are using Cloud Shell to run a query with metadata fields that begin with `$`, you should escape the `$` with a backtick to let Cloud Shell know it's not a variable and should be consumed as a literal in the query text.
-You can also get twins based on **whether a certain property is defined**. Here is a query that gets twins that have a defined *Location* property:
+You can also get twins based on **whether a certain property is defined**. Here's a query that gets twins that have a defined *Location* property:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByProperty2":::
-This can help you to get twins by their *tag* properties, as described in [Add tags to digital twins](how-to-use-tags.md). Here is a query that gets all twins tagged with *red*:
+This query can help you to get twins by their *tag* properties, as described in [Add tags to digital twins](how-to-use-tags.md). Here's a query that gets all twins tagged with *red*:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryMarkerTags1":::
-You can also get twins based on the **type of a property**. Here is a query that gets twins whose *Temperature* property is a number:
+You can also get twins based on the **type of a property**. Here's a query that gets twins whose *Temperature* property is a number:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByProperty3":::
So for example, if you query for twins of the model `dtmi:example:widget;4`, the
`IS_OF_MODEL` can take several different parameters, and the rest of this section is dedicated to its different overload options. The simplest use of `IS_OF_MODEL` takes only a `twinTypeName` parameter: `IS_OF_MODEL(twinTypeName)`.
-Here is a query example that passes a value in this parameter:
+Here's a query example that passes a value in this parameter:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByModel1":::
-To specify a twin collection to search when there is more than one (like when a `JOIN` is used), add the `twinCollection` parameter: `IS_OF_MODEL(twinCollection, twinTypeName)`.
-Here is a query example that adds a value for this parameter:
+To specify a twin collection to search when there's more than one (like when a `JOIN` is used), add the `twinCollection` parameter: `IS_OF_MODEL(twinCollection, twinTypeName)`.
+Here's a query example that adds a value for this parameter:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByModel2"::: To do an exact match, add the `exact` parameter: `IS_OF_MODEL(twinTypeName, exact)`.
-Here is a query example that adds a value for this parameter:
+Here's a query example that adds a value for this parameter:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByModel3"::: You can also pass all three arguments together: `IS_OF_MODEL(twinCollection, twinTypeName, exact)`.
-Here is a query example specifying a value for all three parameters:
+Here's a query example specifying a value for all three parameters:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByModel4":::
Here is a query example specifying a value for all three parameters:
When querying based on digital twins' **relationships**, the Azure Digital Twins query language has a special syntax.
-Relationships are pulled into the query scope in the `FROM` clause. Unlike in "classical" SQL-type languages, each expression in this `FROM` clause is not a table; rather, the `FROM` clause expresses a cross-entity relationship traversal. To traverse across relationships, Azure Digital Twins uses a custom version of `JOIN`.
+Relationships are pulled into the query scope in the `FROM` clause. Unlike in "classical" SQL-type languages, each expression in the `FROM` clause isn't a table; rather, the `FROM` clause expresses a cross-entity relationship traversal. To traverse across relationships, Azure Digital Twins uses a custom version of `JOIN`.
-Recall that with the Azure Digital Twins [model](concepts-models.md) capabilities, relationships do not exist independently of twins. This means that relationships here can't be queried independently and must be tied to a twin.
-To handle this, the keyword `RELATED` is used in the `JOIN` clause to pull in the set of a certain type of relationship coming from the twin collection. The query must then filter in the `WHERE` clause which specific twin(s) to use in the relationship query (using the twins' `$dtId` values).
+Recall that with the Azure Digital Twins [model](concepts-models.md) capabilities, relationships don't exist independently of twins, meaning that relationships here can't be queried independently and must be tied to a twin.
+To reflect this fact, the keyword `RELATED` is used in the `JOIN` clause to pull in the set of a certain type of relationship coming from the twin collection. The query must then filter in the `WHERE` clause to indicate which specific twin(s) to use in the relationship query (using the twins' `$dtId` values).
The following sections give examples of what this looks like. ### Basic relationship query
-Here is a sample relationship-based query. This code snippet selects all digital twins with an *ID* property of 'ABC', and all digital twins related to these digital twins via a *contains* relationship.
+Here's a sample relationship-based query. This code snippet selects all digital twins with an *ID* property of 'ABC', and all digital twins related to these digital twins via a *contains* relationship.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByRelationship1":::
Here is a sample relationship-based query. This code snippet selects all digital
You can use the relationship query structure to identify a digital twin that's the source or the target of a relationship.
-For instance, you can start with a source twin and follow its relationships to find the target twins of the relationships. Here is an example of a query that finds the target twins of the *feeds* relationships coming from the twin source-twin.
+For instance, you can start with a source twin and follow its relationships to find the target twins of the relationships. Here's an example of a query that finds the target twins of the *feeds* relationships coming from the twin source-twin.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByRelationshipSource":::
You can also start with the target of the relationship and trace the relationshi
Similarly to the way digital twins have properties described via DTDL, relationships can also have properties. You can query twins **based on the properties of their relationships**. The Azure Digital Twins query language allows filtering and projection of relationships, by assigning an alias to the relationship within the `JOIN` clause.
-As an example, consider a *servicedBy* relationship that has a *reportedCondition* property. In the below query, this relationship is given an alias of 'R' in order to reference its property.
+As an example, consider a *servicedBy* relationship that has a *reportedCondition* property. In the below query, this relationship is given an alias of 'R' to reference its property.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByRelationship2":::
In the example above, note how *reportedCondition* is a property of the *service
### Query with multiple JOINs
-Up to five `JOIN`s are supported in a single query. This allows you to traverse multiple levels of relationships at once.
+Up to five `JOIN`s are supported in a single query, which allows you to traverse multiple levels of relationships at once.
To query on multiple levels of relationships, use a single `FROM` statement followed by N `JOIN` statements, where the `JOIN` statements express relationships on the result of a previous `FROM` or `JOIN` statement.
-Here is an example of a multi-join query, which gets all the light bulbs contained in the light panels in rooms 1 and 2.
+Here's an example of a multi-join query, which gets all the light bulbs contained in the light panels in rooms 1 and 2.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="QueryByRelationship3":::
Add a `WHERE` clause to count the number of items that meet certain criteria.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="SelectCount2":::
-You can also use `COUNT` along with the `JOIN` clause. Here is a query that counts all the light bulbs contained in the light panels of rooms 1 and 2:
+You can also use `COUNT` along with the `JOIN` clause. Here's a query that counts all the light bulbs contained in the light panels of rooms 1 and 2:
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="SelectCount3":::
You can select the top several items in a query using the `SELECT TOP` clause.
By using projections in the `SELECT` statement, you can choose which columns a query will return. Projection is now supported for both primitive and complex properties. For more information about projections with Azure Digital Twins, see the [SELECT clause reference documentation](reference-query-clause-select.md#select-columns-with-projections).
-Here is an example of a query that uses projection to return twins and relationships. The following query projects the Consumer, Factory and Edge from a scenario where a Factory with an ID of *ABC* is related to the Consumer through a relationship of *Factory.customer*, and that relationship is presented as the *Edge*.
+Here's an example of a query that uses projection to return twins and relationships. The following query projects the Consumer, Factory and Edge from a scenario where a Factory with an ID of *ABC* is related to the Consumer through a relationship of *Factory.customer*, and that relationship is presented as the *Edge*.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="Projections1":::
The following query does the same operations as the previous example, but it ali
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="Projections4":::
-Here is a similar query that queries the same set as above, but projects only the *Consumer.name* property as `consumerName`, and projects the complete Factory as a twin.
+Here's a similar query that queries the same set as above, but projects only the *Consumer.name* property as `consumerName`, and projects the complete Factory as a twin.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="Projections5":::
For example, consider a scenario in which Buildings contain Floors and Floors co
## Other compound query examples
-You can **combine** any of the above types of query using combination operators to include more detail in a single query. Here are some additional examples of compound queries that query for more than one type of twin descriptor at once.
+You can **combine** any of the above types of query using combination operators to include more detail in a single query. Here are some other examples of compound queries that query for more than one type of twin descriptor at once.
* Out of the devices that Room 123 has, return the MxChip devices that serve the role of Operator :::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="OtherExamples1":::
You can **combine** any of the above types of query using combination operators
## Run queries with the API
-Once you have decided on a query string, you execute it by making a call to the [Query API](/rest/api/digital-twins/dataplane/query).
+Once you've decided on a query string, you execute it by making a call to the [Query API](/rest/api/digital-twins/dataplane/query).
You can call the API directly, or use one of the [SDKs](concepts-apis-sdks.md#overview-data-plane-apis) available for Azure Digital Twins.
The following code snippet illustrates the [.NET (C#) SDK](/dotnet/api/overview/
The query used in this call returns a list of digital twins, which the above example represents with [BasicDigitalTwin](/dotnet/api/azure.digitaltwins.core.basicdigitaltwin?view=azure-dotnet&preserve-view=true) objects. The return type of your data for each query will depend on what terms you specify with the `SELECT` statement:
* Queries that begin with `SELECT * FROM ...` will return a list of digital twins (which can be serialized as `BasicDigitalTwin` objects, or other custom digital twin types that you may have created).
* Queries that begin in the format `SELECT <A>, <B>, <C> FROM ...` will return a dictionary with keys `<A>`, `<B>`, and `<C>`.
-* Other formats of `SELECT` statements can be crafted to return custom data. You might consider creating your own classes to handle very customized result sets.
+* Other formats of `SELECT` statements can be crafted to return custom data. You might consider creating your own classes to handle customized result sets.
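+
+As a minimal sketch (assuming a `DigitalTwinsClient` named `client`), running a `SELECT *` query and iterating its twin results looks like this:
+
+```csharp
+// Each result of a SELECT * query deserializes as a BasicDigitalTwin.
+await foreach (BasicDigitalTwin twin in client.QueryAsync<BasicDigitalTwin>("SELECT * FROM DIGITALTWINS"))
+{
+    Console.WriteLine(twin.Id);
+}
+```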
### Query with paging
-Query calls support paging. Here is a complete example using `BasicDigitalTwin` as query result type with error handling and paging:
+Query calls support paging. Here's a complete example using `BasicDigitalTwin` as query result type with error handling and paging:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/queries.cs" id="FullQuerySample":::
digital-twins How To Route With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-route-with-managed-identity.md
# Enable a managed identity for routing Azure Digital Twins events (preview)
-This article describes how to enable a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources-preview) (currently in preview), and use the identity when forwarding events to supported routing destinations. Setting up a managed identity is not required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hub](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
+This article describes how to enable a [system-assigned identity for an Azure Digital Twins instance](concepts-security.md#managed-identity-for-accessing-other-resources-preview) (currently in preview), and use the identity when forwarding events to supported routing destinations. Setting up a managed identity isn't required for routing, but it can help the instance to easily access other Azure AD-protected resources, such as [Event Hub](../event-hubs/event-hubs-about.md), [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md).
Here are the steps that are covered in this article:
1. Create an Azure Digital Twins instance with a system-assigned identity or enable system-assigned identity on an existing Azure Digital Twins instance.
1. Add an appropriate role or roles to the identity. For example, assign the **Azure Event Hubs Data Sender** role to the identity if the endpoint is Event Hub, or the **Azure Service Bus Data Sender** role if the endpoint is Service Bus.
-1. Create an endpoint in Azure Digital Twins that is able to use system-assigned identities for authentication.
+1. Create an endpoint in Azure Digital Twins that can use system-assigned identities for authentication.
## Enable system-managed identity for the instance
Either of these creation methods will give the same configuration options and th
### Add a system-managed identity during instance creation
-In this section, you'll learn how to enable a system-managed identity for an Azure Digital Twins instance while the instance is being created. You can enable the identity whether you are creating the instance with the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/what-is-azure-cli). Use the tabs below to select instructions for your preferred experience.
+In this section, you'll learn how to enable a system-managed identity for an Azure Digital Twins instance while the instance is being created. You can enable the identity whether you're creating the instance with the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/what-is-azure-cli). Use the tabs below to select instructions for your preferred experience.
# [Portal](#tab/portal)
To assign a role to the identity, start by opening the [Azure portal](https://po
# [CLI](#tab/cli)
-You can add the `--scopes` parameter onto the `az dt create` command in order to assign the identity to one or more scopes with a specified role. This can be used when first creating the instance, or later by passing in the name of an instance that already exists.
+You can add the `--scopes` parameter onto the `az dt create` command to assign the identity to one or more scopes with a specified role. The command with this parameter can be used when first creating the instance, or later by passing in the name of an instance that already exists.
-Here is an example that creates an instance with a system managed identity, and assigns that identity a custom role called `MyCustomRole` in an event hub.
+Here's an example that creates an instance with a system-managed identity, and assigns that identity a custom role called `MyCustomRole` in an event hub.
```azurecli-interactive az dt create --dt-name <instance-name> --resource-group <resource-group> --assign-identity --scopes "/subscriptions/<subscription ID>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<Event-Hubs-namespace>/eventhubs/<event-hub-name>" --role MyCustomRole
az dt create --dt-name <instance-name> --resource-group <resource-group> --assig
For more examples of role assignments with this command, see the [az dt create reference documentation](/cli/azure/dt#az_dt_create).
-Alternatively, you can also use the [az role assignment](/cli/azure/role/assignment?view=azure-cli-latest&preserve-view=true) command group to create and manage roles. This can be used to support additional scenarios where you don't want to group role assignment with the create command.
+You can also use the [az role assignment](/cli/azure/role/assignment?view=azure-cli-latest&preserve-view=true) command group to create and manage roles. This command can be used to support other scenarios where you don't want to group role assignment with the create command.
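+
+For example, here's a hedged sketch of a standalone role assignment (the principal ID and scope values are placeholders):
+
+```azurecli
+az role assignment create --assignee "<identity-principal-id>" --role "Azure Event Hubs Data Sender" --scope "<event-hubs-resource-id>"
+```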
## Create an endpoint with identity-based authentication
-After setting up a system-managed identity for your Azure Digital Twins instance and assigning it the appropriate role(s), you can create Azure Digital Twins [endpoints](how-to-manage-routes-portal.md#create-an-endpoint-for-azure-digital-twins) that are capable of using the identity for authentication. This option is only available for Event Hub and Service Bus-type endpoints (it's not supported for Event Grid).
+After setting up a system-managed identity for your Azure Digital Twins instance and assigning it the appropriate role(s), you can create Azure Digital Twins [endpoints](how-to-manage-routes-portal.md#create-an-endpoint-for-azure-digital-twins) that can use the identity for authentication. This option is only available for Event Hub and Service Bus-type endpoints (it's not supported for Event Grid).
>[!NOTE] > You cannot edit an endpoint that has already been created with key-based identity to change to identity-based authentication. You must choose the authentication type when the endpoint is first created.
Finish setting up your endpoint and select **Save**.
Creating the endpoint with the CLI is done by adding a `--auth-type` parameter to the `az dt endpoint create` command that's used to create the endpoint. (For more information about this command, see its [reference documentation](/cli/azure/dt/endpoint/create?view=azure-cli-latest&preserve-view=true) or the [general instructions for setting up an Azure Digital Twins endpoint](how-to-manage-routes-apis-cli.md#create-the-endpoint)).
-To create an endpoint that uses identity-based authentication, specify the `IdentityBased` authentication type with the `--auth-type` parameter. The example below illustrates this for an Event Hubs endpoint.
+To create an endpoint that uses identity-based authentication, specify the `IdentityBased` authentication type with the `--auth-type` parameter. The example below illustrates this functionality for an Event Hubs endpoint.
```azurecli-interactive az dt endpoint create eventhub --endpoint-name <endpoint-name> --eventhub-resource-group <eventhub-resource-group> --eventhub-namespace <eventhub-namespace> --eventhub <eventhub-name> --auth-type IdentityBased --dt-name <instance-name>
digital-twins Reference Query Clause From https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-query-clause-from.md
To name the collection:
### Examples
-Here is a basic query. The following query returns all digital twins in the instance.
+Here's a basic query. The following query returns all digital twins in the instance.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="FromDigitalTwinsExample":::
-Here is a query with a named collection. The following query assigns a name `T` to the collection, and still returns all digital twins in the instance.
+Here's a query with a named collection. The following query assigns a name `T` to the collection, and still returns all digital twins in the instance.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="FromDigitalTwinsNamedExample":::
To name the collection:
### Examples
-Here is a query that returns all relationships in the instance.
+Here's a query that returns all relationships in the instance.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="FromRelationshipsExample":::
-Here is a query that returns all relationships coming from twins `A`, `B`, `C`, or `D`.
+Here's a query that returns all relationships coming from twins `A`, `B`, `C`, or `D`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="FromRelationshipsFilteredExample":::
The following limits apply to queries using `FROM`.
* [No subqueries](#no-subqueries)
* [Choose FROM RELATIONSHIPS or JOIN](#choose-from-relationships-or-join)
-See the sections below for more details.
+For more information, see the following sections.
### No subqueries
The following query shows an example of what **cannot** be done as per this limi
### Choose FROM RELATIONSHIPS or JOIN
-The `FROM RELATIONSHIPS` feature cannot be combined with `JOIN`. You will have to select which of these options works best for the information you'd like to select.
+The `FROM RELATIONSHIPS` feature cannot be combined with `JOIN`. You'll have to select which of these options works best for the information you'd like to select.
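For illustration, here are hedged sketches of the two options; the `contains` relationship name and the twin IDs are assumptions. The first queries the relationship collection directly:

```sql
-- 'Room1' is a placeholder twin ID
SELECT R FROM RELATIONSHIPS R WHERE R.$sourceId = 'Room1'
```

The second reaches related twins by traversing from a source twin with `JOIN ... RELATED`. The two forms cannot be combined in a single query:

```sql
-- 'contains' and 'Room1' are assumptions, for illustration only
SELECT LightBulb FROM DIGITALTWINS Room JOIN LightBulb RELATED Room.contains WHERE Room.$dtId = 'Room1'
```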
digital-twins Reference Query Clause Join https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-query-clause-join.md
The `JOIN` clause is used in the Azure Digital Twins query language as part of t
This clause is optional while querying.

## Core syntax: JOIN ... RELATED
-Because relationships in Azure Digital Twins are part of digital twins, not independent entities, the `RELATED` keyword is used in `JOIN` queries to reference the set of relationships of a certain type from the twin collection. This set of relationships can be assigned a collection name.
+Because relationships in Azure Digital Twins are part of digital twins, not independent entities, the `RELATED` keyword is used in `JOIN` queries to reference the set of relationships of a certain type from the twin collection. The set of relationships can be assigned a collection name.
-The query must then use the `WHERE` clause to specify which specific twin or twins are being used to support the relationship query. This is done by filtering on either the source or target twin's `$dtId` value.
+The query must then use the `WHERE` clause to specify which twin or twins are used to support the relationship query, by filtering on either the source or target twin's `$dtId` value.
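As a sketch of the full pattern (the model, relationship, and twin names are assumptions): the `RELATED` keyword names the relationship set, and the `WHERE` clause anchors it to a specific source twin by `$dtId`:

```sql
-- 'contains' and 'Building1' are assumptions, for illustration only
SELECT Floor FROM DIGITALTWINS Building JOIN Floor RELATED Building.contains WHERE Building.$dtId = 'Building1'
```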
### Syntax
The following limits apply to queries using `JOIN`.
* [No OUTER JOIN semantics](#no-outer-join-semantics)
* [Source twin required](#twins-required)
-See the sections below for more details.
+For more information, see the following sections.
### Depth limit of five
Graph traversal depth is restricted to five `JOIN` levels per query.
#### Example
-The following query illustrates the maximum number of `JOINs` that are possible in an Azure Digital Twins query. It gets all the LightBulbs in Buliding1.
+The following query illustrates the maximum number of `JOIN` clauses that are possible in an Azure Digital Twins query. It gets all the LightBulbs in Building1.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="MaxJoinExample":::

### No OUTER JOIN semantics
-`OUTER JOIN` semantics are not supported, meaning if the relationship has a rank of zero, then the entire "row" is eliminated from the output result set.
+`OUTER JOIN` semantics aren't supported, meaning if the relationship has a rank of zero, then the entire "row" is eliminated from the output result set.
#### Example
If Building1 contains no floors, then this query will return an empty result set
### Twins required
-Relationships in Azure Digital Twins can't be queried as independent entities; you also need to provide information about the source twin that the relationship comes from. This is included as part of the default `JOIN` usage in Azure Digital Twins through the `RELATED` keyword.
+Relationships in Azure Digital Twins can't be queried as independent entities; you also need to provide information about the source twin that the relationship comes from. This functionality is included as part of the default `JOIN` usage in Azure Digital Twins through the `RELATED` keyword.
Queries with a `JOIN` clause must also filter by any twin's `$dtId` property in the `WHERE` clause, to clarify which twin(s) are being used to support the relationship query.
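The `$dtId` filter can anchor either end of the relationship. For example, this sketch (names are assumptions) pins the target twin rather than the source:

```sql
-- 'contains' and 'LightBulb1' are assumptions, for illustration only
SELECT Room FROM DIGITALTWINS Room JOIN LightBulb RELATED Room.contains WHERE LightBulb.$dtId = 'LightBulb1'
```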
digital-twins Reference Query Clause Select https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-query-clause-select.md
Use the `*` character in a select statement to project the digital twin document
### Returns
-The set of properties which are returned from the query.
+The set of properties that are returned from the query.
### Example
To project a property:
A collection of twins, properties, or relationships specified in the projection.
-If a property included in the projection is not present for a particular data row, the property will similarly not be present in the result set. For an example of this behavior, see [Project property example: Property not present for a data row](#project-property-example-property-not-present-for-a-data-row).
+If a property included in the projection isn't present for a particular data row, the property will similarly not be present in the result set. For an example of this behavior, see [Project property example: Property not present for a data row](#project-property-example-property-not-present-for-a-data-row).
### Examples
Below is an example query that projects a collection from this graph. The follow
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="SelectProjectCollectionExample":::
-Here is the JSON payload that's returned from this query:
+Here's the JSON payload that's returned from this query:
```json
{
Here is the JSON payload that's returned from this query:
#### Project with JOIN example
-Projection is commonly used to return a collection specified in a `JOIN`. The following query uses projection to return the data of the Consumer, Factory and Relationship. For more about the `JOIN` syntax used in the example, see [Azure Digital Twins query language reference: JOIN clause](reference-query-clause-join.md).
+Projection is commonly used to return a collection specified in a `JOIN`. The following query uses projection to return the data of the Consumer, Factory, and Relationship. For more information about the `JOIN` syntax used in the example, see [Azure Digital Twins query language reference: JOIN clause](reference-query-clause-join.md).
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="SelectProjectJoinExample":::
-Here is the JSON payload that's returned from this query:
+Here's the JSON payload that's returned from this query:
```json
{
Here is the JSON payload that's returned from this query:
#### Project property example
-Here is an example that projects a property. The following query uses projection to return the `name` property of the Consumer twin, and the `managedBy` property of the relationship.
+Here's an example that projects a property. The following query uses projection to return the `name` property of the Consumer twin, and the `managedBy` property of the relationship.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="SelectProjectPropertyExample":::
-Here is the JSON payload that's returned from this query:
+Here's the JSON payload that's returned from this query:
```json
{
Here is the JSON payload that's returned from this query:
#### Project property example: Property not present for a data row
-If a property included in the projection is not present for a particular data row, the property will similarly not be present in the result set.
+If a property included in the projection isn't present for a particular data row, the property will similarly not be present in the result set.
-Consider for this example a set of twins that represent people. Some of the twins have ages associated with them, but others do not.
+Consider for this example a set of twins that represent people. Some of the twins have ages associated with them, but others don't.
-Here is a query that projects the `name` and `age` properties:
+Here's a query that projects the `name` and `age` properties:
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" id="SelectProjectPropertyNotPresentExample":::
-The result might look something like this, with the `age` property missing from some twins in the result where the twins do not have this property.
+The result might look something like this, with the `age` property missing from some twins in the result where the twins don't have this property.
```json
{
The following query returns the count of all relationships in the instance.
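A query of that shape would look like the following sketch, using the `COUNT()` aggregate over the relationship collection:

```sql
SELECT COUNT() FROM RELATIONSHIPS
```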
## SELECT TOP
-Use this method to return only a certain number of top items that meet the query requirements.
+Use this method to return only a specified number of the top items that meet the query requirements.
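For illustration, a sketch that returns at most three matching twins; the `Temperature` property is an assumption, and the count is wrapped in parentheses:

```sql
-- 'Temperature' is an assumed property, for illustration only
SELECT TOP (3) FROM DIGITALTWINS T WHERE T.Temperature > 70
```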
### Syntax
digital-twins Reference Query Clause Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-query-clause-where.md
This document contains reference information on the **WHERE clause** for the [Azure Digital Twins query language](concepts-query-language.md).
-The WHERE clause is the last part of a query. It is used to filter the items that are returned based on specific conditions.
+The WHERE clause is the last part of a query. It's used to filter the items that are returned based on specific conditions.
This clause is optional while querying.
A condition evaluating to a `Boolean` value.
### Examples
-Here is an example using properties and operators. The following query specifies in the WHERE clause to only return the twin with a `$dtId` value of Room1.
+Here's an example using properties and operators. The following query uses the WHERE clause to return only the twin with a `$dtId` value of Room1.
:::code language="sql" source="~/digita
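A sketch of the query described above:

```sql
-- 'Room1' is a placeholder twin ID
SELECT * FROM DIGITALTWINS T WHERE T.$dtId = 'Room1'
```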