Updates from: 02/11/2023 02:11:07
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 10/20/2022 Last updated : 02/10/2023 # How Application Provisioning works in Azure Active Directory
-Automatic provisioning refers to creating user identities and roles in the cloud applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Before you start a deployment, you can review this article to learn how Azure AD provision works and get configuration recommendations.
+Automatic provisioning refers to creating user identities and roles in the cloud applications that users need to access. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Before you start a deployment, you can review this article to learn how Azure AD provisioning works and get configuration recommendations.
The **Azure AD Provisioning Service** provisions users to SaaS apps and other systems by connecting to a System for Cross-Domain Identity Management (SCIM) 2.0 user management API endpoint provided by the application vendor. This SCIM endpoint allows Azure AD to programmatically create, update, and remove users. For selected applications, the provisioning service can also create, update, and remove additional identity-related objects, such as groups and roles. The channel used for provisioning between Azure AD and the application is encrypted using HTTPS TLS 1.2 encryption.
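For illustration, a create operation against an application's SCIM 2.0 endpoint is an HTTP POST to its `/Users` resource. The following is a minimal sketch only: the endpoint URL, bearer token, and attribute values are placeholders, and the attributes Azure AD actually sends depend on the attribute mappings configured for the application.

```http
POST https://app.contoso.com/scim/v2/Users
Authorization: Bearer {scim-bearer-token}
Content-Type: application/scim+json

{
  "schemas": [ "urn:ietf:params:scim:schemas:core:2.0:User" ],
  "externalId": "alice.smith",
  "userName": "alice@contoso.com",
  "active": true,
  "name": { "givenName": "Alice", "familyName": "Smith" },
  "emails": [ { "primary": true, "type": "work", "value": "alice@contoso.com" } ]
}
```

Updates and removals follow the same pattern against `/Users/{id}`, using PATCH for attribute changes and either a DELETE or a PATCH that sets `active` to `false`, depending on how the application implements deprovisioning.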
After the initial cycle, all other cycles will:
The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs: - The service is manually stopped using the Azure portal, or using the appropriate Microsoft Graph API command.-- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This will not break the links between source and target objects. To break the links use [Restart synchronizationJob](https://learn.microsoft.com/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http) with the following request:
+- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This will not break the links between source and target objects. To break the links use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
<!-- { "blockType": "request",
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Previously updated : 10/20/2022 Last updated : 02/09/2023
In Azure Active Directory (Azure AD), the term *app provisioning* refers to auto
![Diagram that shows provisioning scenarios.](../governance/media/what-is-provisioning/provisioning.png)
-Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and more.
+Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and many more.
Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. If your application supports [SCIM](https://aka.ms/scimoverview), or you've built a SCIM gateway to connect to your legacy application, you can use the Azure AD Provisioning agent to [directly connect](./on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](./on-premises-ldap-connector-configure.md) user store or a [SQL](./tutorial-ecma-sql-connector.md) database, Azure AD can support those as well.
For other applications that support SCIM 2.0, follow the steps in [Build a SCIM
- [List of tutorials on how to integrate SaaS apps](../saas-apps/tutorial-list.md) - [Customizing attribute mappings for user provisioning](customize-application-attributes.md)-- [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md)
+- [Scoping filters for user provisioning](define-conditional-rules-for-provisioning-user-accounts.md)
active-directory Application Proxy Configure Single Sign On With Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-kcd.md
Previously updated : 11/17/2022 Last updated : 02/10/2023
# Kerberos Constrained Delegation for single sign-on (SSO) to your apps with Application Proxy
-You can provide single sign-on for on-premises applications published through Application Proxy that are secured with integrated Windows authentication. These applications require a Kerberos ticket for access. Application Proxy uses Kerberos Constrained Delegation (KCD) to support these applications.
+You can provide single sign-on for on-premises applications published through Application Proxy that are secured with integrated Windows authentication. These applications require a Kerberos ticket for access. Application Proxy uses Kerberos Constrained Delegation (KCD) to support these applications.
+
+To learn more about Single Sign-On (SSO), see [What is Single Sign-On?](../manage-apps/what-is-single-sign-on.md).
You can enable single sign-on to your applications using integrated Windows authentication (IWA) by giving Application Proxy connectors permission in Active Directory to impersonate users. The connectors use this permission to send and receive tokens on their behalf.
But, in some cases, the request is successfully sent to the backend application
## Next steps * [How to configure an Application Proxy application to use Kerberos Constrained Delegation](application-proxy-back-end-kerberos-constrained-delegation-how-to.md)
-* [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
+* [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
active-directory Concept Certificate Based Authentication Mobile Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-mobile-ios.md
Title: Azure Active Directory certificate-based authentication on iOS devices - Azure Active Directory
-description: Learn about Azure Active Directory certificate-based authentication on iOS devices
+ Title: Azure Active Directory certificate-based authentication on Apple devices - Azure Active Directory
+description: Learn about Azure Active Directory certificate-based authentication on Apple devices that run macOS or iOS
Previously updated : 01/29/2023 Last updated : 02/09/2023 -+
-# Azure Active Directory certificate-based authentication on iOS
+# Azure Active Directory certificate-based authentication on iOS and macOS
+This topic covers Azure Active Directory (Azure AD) certificate-based authentication (CBA) support for macOS and iOS devices.
+
+## Azure Active Directory certificate-based authentication on macOS devices
+
+Devices that run macOS can use CBA to authenticate against Azure AD by using their X.509 client certificate. Azure AD CBA is supported with certificates on-device and external hardware protected security keys. On macOS, Azure AD CBA is supported on all browsers and on Microsoft first-party applications.
+
+### Browsers supported on macOS
+
+|Edge | Chrome | Safari | Firefox |
+|--|||-|
+|&#x2705; |&#x2705; | &#x2705; |&#x2705; |
+
+### macOS device sign-in with Azure AD CBA
+
+Azure AD CBA isn't currently supported for device-based sign-in to macOS machines. The certificate used to sign in to the device can be the same certificate used to authenticate to Azure AD from a browser or desktop application, but the device sign-in itself isn't yet supported against Azure AD.
+
+## Azure Active Directory certificate-based authentication on iOS devices
Devices that run iOS can use certificate-based authentication (CBA) to authenticate to Azure Active Directory (Azure AD) using a client certificate on their device when connecting to: - Office mobile applications such as Microsoft Outlook and Microsoft Word
Devices that run iOS can use certificate-based authentication (CBA) to authentic
Azure AD CBA is supported for certificates on-device on native browsers and on Microsoft first-party applications on iOS devices.
-## Prerequisites
+### Prerequisites
- iOS version must be iOS 9 or later. - Microsoft Authenticator is required for Office applications and Outlook on iOS.
-## Support for on-device certificates and external storage
+### Support for on-device certificates and external storage
On-device certificates are provisioned on the device. Customers can use Mobile Device Management (MDM) to provision the certificates on the device. Since iOS doesn't support hardware protected keys out of the box, customers can use external storage devices for certificates.
-## Supported platforms
+### Supported platforms
- Only native browsers are supported - Applications using latest MSAL libraries or Microsoft Authenticator can do CBA-- Edge with profile, when users add account and logged in a profile will support CBA
+- Edge with a profile supports CBA when users add an account and sign in to that profile
- Microsoft first party apps with latest MSAL libraries or Microsoft Authenticator can do CBA ### Browsers
On-device certificates are provisioned on the device. Customers can use Mobile D
|--|||-| |&#10060; | &#10060; | &#x2705; |&#10060; |
-## Microsoft mobile applications support
+### Microsoft mobile applications support
| Applications | Support | |:|::|
On-device certificates are provisioned on the device. Customers can use Mobile D
|Word / Excel / PowerPoint | &#x2705; | |Yammer | &#x2705; |
-## Support for Exchange ActiveSync clients
+### Support for Exchange ActiveSync clients
On iOS 9 or later, the native iOS mail client is supported.
To determine if your email application supports Azure AD CBA, contact your appli
Certificates can be provisioned in external devices like hardware security keys along with a PIN to protect private key access. Microsoft's mobile certificate-based solution coupled with the hardware security keys is a simple, convenient, FIPS (Federal Information Processing Standards) certified phishing-resistant MFA method.
-As for iOS 16/iPadOS 16.1, Apple devices provide native driver support for USB-C or Lightning connected CCID-compliant smart cards. This means Apple devices on iOS 16/iPadOS 16.1 will see a USB-C or Lightning connected CCID-compliant device as a smart card without the use of additional drivers or 3rd party apps. Azure AD CBA will work on these USB-A or USB-C, or Lightning connected CCID-compliant smart cards.
+As of iOS 16/iPadOS 16.1, Apple devices provide native driver support for USB-C or Lightning connected CCID-compliant smart cards. This means Apple devices on iOS 16/iPadOS 16.1 see a USB-C or Lightning connected CCID-compliant device as a smart card without the use of additional drivers or third-party apps. Azure AD CBA works on these USB-A, USB-C, or Lightning connected CCID-compliant smart cards.
### Advantages of certificates on hardware security key
Security keys with certificates:
### Azure AD CBA on iOS mobile with YubiKey
-Even though the native Smartcard/CCID driver is available on iOS/iPadOS for Lightning connected CCID-compliant smart cards, the YubiKey 5Ci Lightning connector is not seen as a connected smart card on these devices without the use of PIV (Personal Identity Verification) middleware like the Yubico Authenticator.
+Even though the native Smartcard/CCID driver is available on iOS/iPadOS for Lightning connected CCID-compliant smart cards, the YubiKey 5Ci Lightning connector isn't seen as a connected smart card on these devices without the use of PIV (Personal Identity Verification) middleware like the Yubico Authenticator.
### One-time registration prerequisite
Even though the native Smartcard/CCID driver is available on iOS/iPadOS for Ligh
1. Install the latest Microsoft Authenticator app. 1. Open Outlook and plug in your YubiKey. 1. Select **Add account** and enter your user principal name (UPN).
-1. Click **Continue** and the iOS certificate picker will appear.
+1. Click **Continue** and the iOS certificate picker appears.
1. Select the public certificate copied from YubiKey that is associated with the user's account. 1. Click **YubiKey required** to open the YubiKey authenticator app. 1. Enter the PIN to access YubiKey and select the back button at the top left corner.
The user should be successfully logged in and redirected to the Outlook homepage
### Troubleshoot certificates on hardware security key
-#### What will happen if the user has certificates both on the iOS device and YubiKey?
+#### What happens if the user has certificates both on the iOS device and YubiKey?
-The iOS certificate picker will show all the certificates on both iOS device and the ones copied from YubiKey into iOS device. Depending on the certificate user picks they will be either taken to YubiKey authenticator to enter PIN or directly authenticated.
+The iOS certificate picker shows all the certificates on the iOS device, including the ones copied from the YubiKey onto the iOS device. Depending on the certificate the user picks, they're either taken to the YubiKey authenticator to enter a PIN or directly authenticated.
#### My YubiKey is locked after incorrectly typing PIN 3 times. How do I fix it? - Users should see a dialog informing them that too many PIN attempts have been made. This dialog also pops up during subsequent attempts to select **Use Certificate or smart card**. - [YubiKey Manager](https://www.yubico.com/support/download/yubikey-manager/) can reset a YubiKey's PIN.
-#### Once CBA fails, clicking on the CBA option again in the 'Other ways to signin' link on the error page fails.
+#### After CBA fails, the CBA option in the 'Other ways to sign in' link also fails. Is there a workaround?
-This issue happens because of certificate caching. We are working to add a fix to clear the cache. As a workaround, clicking cancel and restarting the login flow will let the user choose a new certificate and successfully login.
+This issue happens because of certificate caching. We're working on an update to clear the cache. As a workaround, click **Cancel**, retry sign-in, and choose a new certificate.
#### Azure AD CBA with YubiKey is failing. What information would help debug the issue?
This issue happens because of certificate caching. We are working to add a fix t
#### How can I enforce phishing-resistant MFA using a hardware security key on browser-based applications on mobile?
-Certificate based authentication and Conditional Access authentication strength capability makes it powerful for customers to enforce authentication needs. Edge as a profile (add an account) will work with a hardware security key like YubiKey and conditional access policy with authentication strength capability can enforce phishing-resistant authentication with CBA.
+Certificate-based authentication combined with the Conditional Access authentication strength capability gives customers a powerful way to enforce authentication requirements. Edge as a profile (add an account) works with a hardware security key like YubiKey, and a Conditional Access policy with authentication strength can enforce phishing-resistant authentication with CBA.
-CBA support for YubiKey is available in the latest Microsoft Authentication Library (MSAL) libraries, any third-party application that integrates the latest MSAL, and all Microsoft first party applications can leverage CBA and Conditional Access authentication strength.
+CBA support for YubiKey is available in the latest Microsoft Authentication Library (MSAL) libraries, and any third-party application that integrates the latest MSAL. All Microsoft first-party applications can use CBA and Conditional Access authentication strength.
### Supported operating systems
CBA support for YubiKey is available in the latest Microsoft Authentication Libr
## Known issue
-On iOS, users will see a "double prompt", where they must click the option to use certificate-based authentication twice. We're working to create a seamless user experience.
+On iOS, users see a "double prompt", where they must click the option to use certificate-based authentication twice. We're working to create a seamless user experience.
## Next steps
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
# Become a Microsoft-compatible FIDO2 security key vendor
-Most hacking related breaches use either stolen or weak passwords. Often, IT will enforce stronger password complexity or frequent password changes to reduce the risk of a security incident. However, this increases help desk costs and leads to poor user experiences as users are required to memorize or store new, complex passwords.
+Most hacking-related breaches use either stolen or weak passwords. IT departments often enforce stronger password complexity or frequent password changes to reduce the risk of a security incident. However, these measures increase help desk costs and lead to poor user experiences because users are required to memorize or store new, complex passwords.
-FIDO2 security keys offer an alternative. FIDO2 security keys can replace weak credentials with strong hardware-backed public/private-key credentials which can't be reused, replayed, or shared across services. Security keys support shared device scenarios, allowing you to carry your credential with you and safely authenticate to an Azure Active Directory joined Windows 10 device that's part of your organization.
+FIDO2 security keys offer an alternative. FIDO2 security keys can replace weak credentials with strong hardware-backed public/private-key credentials that can't be reused, replayed, or shared across services. Security keys support shared device scenarios, allowing you to carry your credential with you and safely authenticate to an Azure Active Directory joined Windows 10 device that's part of your organization.
-Microsoft partners with FIDO2 security key vendors to ensure that security devices work on Windows, the Microsoft Edge browser, and online Microsoft accounts, to enable strong password-less authentication.
+Microsoft partners with FIDO2 security key vendors to ensure that security devices work on Windows, the Microsoft Edge browser, and online Microsoft accounts. FIDO2 security keys enable strong password-less authentication.
-You can become a Microsoft-compatible FIDO2 security key vendor through the following process. Microsoft doesn't commit to do go-to-market activities with the partner and will evaluate partner priority based on customer demand.
+You can become a Microsoft-compatible FIDO2 security key vendor through the following process. Microsoft doesn't commit to go-to-market activities with the partner and evaluates partner priority based on customer demand.
-1. First, your authenticator needs to have a FIDO2 certification. We won't be able to work with providers who don't have a FIDO2 certification. To learn more about the certification, please visit this website: [https://fidoalliance.org/certification/](https://fidoalliance.org/certification/)
-2. After you have a FIDO2 certification, please fill in your request to our form here: [https://forms.office.com/r/NfmQpuS9hF](https://forms.office.com/r/NfmQpuS9hF). Our engineering team will only test compatibility of your FIDO2 devices. We won't test security of your solutions.
-3. Once we confirm a move forward to the testing phase, the process usually take about 3-6 months. The steps usually involve:
- - Initial discussion between Microsoft and your team.
- - Verify FIDO Alliance Certification or the path to certification if not complete
- - Receive an overview of the device from the vendor
- - Microsoft will share our test scripts with you. Our engineering team will be able to answer questions if you have any specific needs.
- - You'll complete and send all passed results to Microsoft Engineering team
-4. Upon successful passing of all tests by Microsoft Engineering team, Microsoft will confirm vendor's device is listed in [the FIDO MDS](https://fidoalliance.org/metadata/).
-5. Microsoft will add your FIDO2 Security Key on Azure AD backend and to our list of approved FIDO2 vendors.
+1. First, your authenticator needs to have a FIDO2 certification. We aren't able to work with providers who don't have a FIDO2 certification. To learn more about the certification, visit the [FIDO Alliance Certification Overview website](https://fidoalliance.org/certification/).
+2. After you have a FIDO2 certification, [submit a request form](https://forms.office.com/r/NfmQpuS9hF) to become a Microsoft-compatible FIDO2 security key vendor. Our engineering team only confirms the features supported by your FIDO2 devices. We don't retest features already tested as part of the FIDO2 certification and don't evaluate the security of your solutions. The process usually takes a few weeks to complete.
+3. After the engineering team successfully confirms the feature list, we'll confirm that the vendor's device is listed in the [FIDO Alliance Metadata Service](https://fidoalliance.org/metadata/).
+4. Microsoft adds your FIDO2 Security Key to the Azure Active Directory backend and to our list of approved FIDO2 vendors.
## Current partners
The following table lists partners who are Microsoft-compatible FIDO2 security k
| VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net | | Yubico | ![y] | ![y]| ![y]| ![n]| ![y] | https://www.yubico.com/solutions/passwordless/ | -- <!--Image references--> [y]: ./media/fido2-compatibility/yes.png [n]: ./media/fido2-compatibility/no.png - ## Next steps [FIDO2 Compatibility](fido2-compatibility.md)-
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
Last updated 01/29/2023 + --++
The following tables list Azure AD feature availability in Azure Government.
|**Authentication, single sign-on, and MFA**|Cloud authentication (Pass-through authentication, password hash synchronization) | &#x2705; | || Federated authentication (Active Directory Federation Services or federation with other identity providers) | &#x2705; | || Single sign-on (SSO) unlimited | &#x2705; |
-|| Multifactor authentication (MFA) <sup>1</sup>| &#x2705; |
+|| Multifactor authentication (MFA) | &#x2705; |
|| Passwordless (Windows Hello for Business, Microsoft Authenticator, FIDO2 security key integrations) | &#x2705; |
+|| Certificate-based authentication | &#x2705; |
|| Service-level agreement | &#x2705; | |**Applications access**|SaaS apps with modern authentication (Azure AD application gallery apps, SAML, and OAUTH 2.0) | &#x2705; | || Group assignment to applications | &#x2705; |
The following tables list Azure AD feature availability in Azure Government.
|| Session lifetime management | &#x2705; | || Identity Protection (vulnerabilities and risky accounts) | See [Identity protection](#identity-protection) below. | || Identity Protection (risk events investigation, SIEM connectivity) | See [Identity protection](#identity-protection) below. |
+|| Entra permissions management | &#10060; |
|**Administration and hybrid identity**|User and group management | &#x2705; | || Advanced group management (Dynamic groups, naming policies, expiration, default classification) | &#x2705; | || Directory synchronization – Azure AD Connect (sync and cloud sync) | &#x2705; |
The following tables list Azure AD feature availability in Azure Government.
|| Global password protection and management – cloud-only users | &#x2705; | || Global password protection and management – custom banned passwords, users synchronized from on-premises Active Directory | &#x2705; | || Microsoft Identity Manager user client access license (CAL) | &#x2705; |
+|| Entra workload identities | &#10060; |
|**End-user self-service**|Application launch portal (My Apps) | &#x2705; | || User application collections in My Apps | &#x2705; | || Self-service account management portal (My Account) | &#x2705; |
The following tables list Azure AD feature availability in Azure Government.
|| Access certifications and reviews | &#x2705; | || Entitlement management | &#x2705; | || Privileged Identity Management (PIM), just-in-time access | &#x2705; |
+|| Entra governance | &#10060; |
|**Event logging and reporting**|Basic security and usage reports | &#x2705; | || Advanced security and usage reports | &#x2705; | || Identity Protection: vulnerabilities and risky accounts | &#x2705; | || Identity Protection: risk events investigation, SIEM connectivity | &#x2705; |
-|**Frontline workers**|SMS sign-in | Feature not available. |
+|**Frontline workers**|SMS sign-in | &#x2705; |
|| Shared device sign-out | Enterprise state roaming for Windows 10 devices isn't available. |
-|| Delegated user management portal (My Staff) | Feature not available. |
+|| Delegated user management portal (My Staff) | &#10060; |
-<sup>1</sup>Microsoft Authenticator only shows GUID and not UPN for compliance reasons.
## Identity protection | Risk Detection | Availability | |-|:--:| |Leaked credentials (MACE) | &#x2705; |
-|Azure AD threat intelligence | Feature not available. |
+|Azure AD threat intelligence | &#10060; |
|Anonymous IP address | &#x2705; | |Atypical travel | &#x2705; |
-|Anomalous Token | Feature not available. |
-|Token Issuer Anomaly| Feature not available. |
+|Anomalous Token | &#x2705; |
+|Token Issuer Anomaly| &#x2705; |
|Malware linked IP address | &#x2705; | |Suspicious browser | &#x2705; | |Unfamiliar sign-in properties | &#x2705; |
The following tables list Azure AD feature availability in Azure Government.
|New country | &#x2705; | |Activity from anonymous IP address | &#x2705; | |Suspicious inbox forwarding | &#x2705; |
-|Azure AD threat intelligence | Feature not available. |
|Additional risk detected | &#x2705; |
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
description: Topic that shows how to configure Azure AD certificate-based authen
Previously updated : 01/30/2023 Last updated : 02/09/2023
Make sure that the following prerequisites are in place:
## Steps to configure and test Azure AD CBA
-Some configuration steps to be done before you enable Azure AD CBA. First, an admin must configure the trusted CAs that issue user certificates. As seen in the following diagram, we use role-based access control to make sure only least-privileged administrators are needed to make changes. Only the [Privileged Authentication Administrator](../roles/permissions-reference.md#privileged-authentication-administrator) role can configure the CA.
+Some configuration steps must be completed before you enable Azure AD CBA. First, an admin must configure the trusted CAs that issue user certificates. As seen in the following diagram, we use role-based access control to make sure only least-privileged administrators are needed to make changes. Only the [Global Administrator](../roles/permissions-reference.md#global-administrator) role can configure the CA.
Optionally, you can also configure authentication bindings to map certificates to single-factor or multifactor authentication, and configure username bindings to map the certificate field to an attribute of the user object. [Authentication Policy Administrators](../roles/permissions-reference.md#authentication-policy-administrator) can configure user-related settings. Once all the configurations are complete, enable Azure AD CBA on the tenant.
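The trusted CAs can also be configured programmatically. The following is a hedged sketch using the Microsoft Graph `certificateBasedAuthConfiguration` endpoint; the organization ID, Base64-encoded certificate, and CRL URL are placeholders:

```http
POST https://graph.microsoft.com/v1.0/organization/{organizationId}/certificateBasedAuthConfiguration
Content-Type: application/json

{
  "certificateAuthorities": [
    {
      "isRootAuthority": true,
      "certificate": "{Base64-encoded-CA-certificate}",
      "certificateRevocationListUrl": "http://crl.contoso.com/root.crl"
    }
  ]
}
```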
For more information, see [Understanding the certificate revocation process](./c
## Step 2: Enable CBA on the tenant
+>[!IMPORTANT]
+>A user is considered capable for MFA when the user is in scope for **Certificate-based authentication** in the Authentication methods policy. This policy requirement means a user can't use proof up as part of their authentication to register other available methods. For more information, see [Azure AD MFA](concept-mfa-howitworks.md).
+ To enable the certificate-based authentication in the Azure portal, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com/) as an Authentication Policy Administrator.
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 02/03/2023 Last updated : 02/10/2023
This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. >[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator. We will remove the admin controls and enforce the number match experience tenant-wide for all users starting February 27, 2023.<br>
+>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator. We will remove the admin controls and enforce the number match experience tenant-wide for all users of Microsoft Authenticator push notifications starting February 27, 2023.<br>
>We highly recommend enabling number matching in the near term for improved sign-in security. Relevant services will begin deploying these changes after February 27, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all users, we highly recommend you enable number match for Microsoft Authenticator push notifications in advance. ## Prerequisites
Number matching is available for the following scenarios. When enabled, all scen
- [AD FS adapter](#ad-fs-adapter) - [NPS extension](#nps-extension)
-Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+Number matching isn't supported for push notifications for Apple Watch or Android wearable devices. Wearable device users need to use their phone to approve notifications when number matching is enabled.
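Ahead of the tenant-wide enforcement, admins can enable number matching for all Microsoft Authenticator users through the Authentication methods policy. The following is a hedged sketch using the Microsoft Graph beta API; it assumes the `featureSettings.numberMatchingRequiredState` surface of the Microsoft Authenticator configuration, where the `all_users` target applies the setting to every user:

```http
PATCH https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/microsoftAuthenticator
Content-Type: application/json

{
  "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
  "featureSettings": {
    "numberMatchingRequiredState": {
      "state": "enabled",
      "includeTarget": {
        "targetType": "group",
        "id": "all_users"
      }
    }
  }
}
```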
### Multifactor authentication
Regardless of their default method, any user who is prompted to sign-in with Aut
No, number matching isn't enforced because it's not a supported feature for MFA Server, which is [deprecated](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454).
-### What happens if a user runs an older version of Microsoft Authenticator?
+### What happens if a user runs an older version of Microsoft Authenticator?
-If a user is running an older version of Microsoft Authenticator that doesn't support number matching, authentication won't work if number matching is enabled. Users need to upgrade to the latest version of Microsoft Authenticator to use it for sign-in.
+If a user is running an older version of Microsoft Authenticator that doesn't support number matching, authentication won't work if number matching is enabled. Users need to upgrade to the latest version of Microsoft Authenticator to use it for sign-in if they use Android versions prior to 6.2006.4198, or iOS versions prior to 6.4.12.
-### Why is my user prompted to tap on one out of three numbers instead of entering the number in their Microsoft Authenticator app?
+### Why is my user prompted to tap on one of three numbers rather than enter the number in their Microsoft Authenticator app?
-Older versions of Microsoft Authenticator prompt users to tap and select a number instead of entering the number in their Microsoft Authenticator app. These authentications won't fail, but we highly recommend that users update to the latest version of the app to be able to enter the number.
+Older versions of Microsoft Authenticator prompt users to tap and select a number rather than enter the number in Microsoft Authenticator. These authentications won't fail, but Microsoft highly recommends that users upgrade to the latest version of Microsoft Authenticator if they use Android versions prior to 6.2108.5654, or iOS versions prior to 6.5.82, so they can use number match.
## Next steps
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Once complete, navigate to the Multi-factor Authentication Server folder, and op
You've successfully installed the Migration Utility.
+>[!NOTE]
+> To ensure no change in behavior during migration, if your MFA Server is associated with an MFA Provider that has no tenant reference, you'll need to update the default MFA settings (for example, custom greetings) for the tenant you're migrating to so that they match the settings in your MFA Provider. We recommend doing this before migrating any users.
+ ### Migrate user data Migrating user data doesn't remove or alter any data in the Multi-Factor Authentication Server database. Likewise, this process won't change where a user performs MFA. This process is a one-way copy of data from the on-premises server to the corresponding user object in Azure AD.
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md
Smart lockout is always on, for all Azure AD customers, with these default setti
Using smart lockout doesn't guarantee that a genuine user is never locked out. When smart lockout locks a user account, we try our best to not lock out the genuine user. The lockout service attempts to ensure that bad actors can't gain access to a genuine user account. The following considerations apply:
-* Each Azure AD data center tracks lockout independently. A user has (*threshold_limit * datacenter_count*) number of attempts, if the user hits each data center.
+* Lockout state across Azure AD data centers is synchronized. The total number of failed sign-in attempts allowed before an account is locked out matches the configured lockout threshold, though there may still be some slight variance before a lockout. Once an account is locked out, it's locked out everywhere, across all Azure AD data centers.
* Smart Lockout uses familiar location vs unfamiliar location to differentiate between a bad actor and the genuine user. Unfamiliar and familiar locations both have separate lockout counters. Smart lockout can be integrated with hybrid deployments that use password hash sync or pass-through authentication to protect on-premises Active Directory Domain Services (AD DS) accounts from being locked out by attackers. By setting smart lockout policies in Azure AD appropriately, attacks can be filtered out before they reach on-premises AD DS.
active-directory Active Directory Enterprise App Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-enterprise-app-role-management.md
Title: Configure role claim for enterprise Azure AD apps
-description: Learn how to configure the role claim issued in the SAML token for enterprise applications in Azure Active Directory
+ Title: Configure the role claim for enterprise applications
+description: Learn how to configure the role claim issued in the SAML token for enterprise applications in Azure Active Directory.
- Previously updated : 11/11/2021 Last updated : 02/10/2023
-# Configure the role claim issued in the SAML token for enterprise applications
+# Configure the role claim issued in the SAML token
-By using Azure Active Directory (Azure AD), you can customize the claim type for the role claim in the response token that you receive after you authorize an app.
+In Azure Active Directory (Azure AD), you can customize the role claim in the access token that is received after an application is authorized. Use this feature if your application expects custom roles in the token returned by Azure AD. You can create as many roles as you need.
## Prerequisites -- An Azure AD subscription with directory setup.-- A subscription that has single sign-on (SSO) enabled. You must configure SSO with your application.
+- An Azure AD subscription with a set up tenant. For more information, see [Quickstart: Set up a tenant](quickstart-create-new-tenant.md).
+- An enterprise application that has been added to the tenant. For more information, see [Quickstart: Add an enterprise application](../manage-apps/add-application-portal.md).
+- Single sign-on (SSO) configured for the application. For more information, see [Enable single sign-on for an enterprise application](../manage-apps/add-application-portal-setup-sso.md).
+- A user account that will be assigned to the role. For more information, see [Quickstart: Create and assign a user account](../manage-apps/add-application-portal-assign-users.md).
> [!NOTE]
-> This article explains on how to create/update/delete application roles on the service principal using APIs in Azure AD. If you would like to use the new user interface for App Roles then please see the details [here](./howto-add-app-roles-in-azure-ad-apps.md).
-
-## When to use this feature
-
-Use this feature if your application expects custom roles in the SAML response returned by Azure AD. You can create as many roles as you need.
-
-## Create roles for an application
-
-1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, in the left pane, select the **Azure Active Directory** icon.
-
- ![Azure Active Directory icon][1]
-
-2. Select **Enterprise applications**. Then select **All applications**.
-
- ![Enterprise applications pane][2]
-
-3. To add a new application, select the **New application** button on the top of the dialog box.
-
- !["New application" button][3]
-
-4. In the search box, type the name of your application, and then select your application from the result panel. Select the **Add** button to add the application.
-
- ![Application in the results list](./media/active-directory-enterprise-app-role-management/tutorial_app_addfromgallery.png)
-
-5. After the application is added, go to the **Properties** page and copy the object ID.
-
- ![Properties Page](./media/active-directory-enterprise-app-role-management/tutorial_app_properties.png)
-
-6. Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) in another window and take the following steps:
-
- 1. Sign in to the Graph Explorer site by using the Global Administrator or coadmin credentials for your tenant.
-
- 1. You need sufficient permissions to create the roles. Select **modify permissions** to get the permissions.
-
- ![The "modify permissions" button](./media/active-directory-enterprise-app-role-management/graph-explorer-new9.png)
-
- > [!NOTE]
- > Cloud App Administrator and App Administrator role will not work in this scenario as we need the Global Administrator permissions for Directory Read and Write.
+> This article explains how to create, update, or delete application roles on the service principal using APIs in Azure AD. To use the new user interface for App Roles, see [Add app roles to your application and receive them in the token](howto-add-app-roles-in-azure-ad-apps.md).
- 1. Select the following permissions from the list (if you don't have these already) and select **Modify Permissions**.
+## Locate the enterprise application
- ![List of permissions and "Modify Permissions" button](./media/active-directory-enterprise-app-role-management/graph-explorer-new10.png)
+Use the following steps to locate the enterprise application:
- 1. Accept the consent. You're logged in to the system again.
+1. In the [Azure portal](https://portal.azure.com/), in the left pane, select **Azure Active Directory**.
+1. Select **Enterprise applications**, and then select **All applications**.
+1. Enter the name of the existing application in the search box, and then select the application from the search results.
+1. After the application is selected, copy the object ID from the overview pane.
- 1. Change the version to **beta**, and fetch the list of service principals from your tenant by using the following query:
+ :::image type="content" source="media/active-directory-enterprise-app-role-management/record-objectid.png" alt-text="Screenshot that shows how to locate and record the object identifier for the application.":::
- `https://graph.microsoft.com/beta/servicePrincipals`
+## Add roles
- If you're using multiple directories, follow this pattern: `https://graph.microsoft.com/beta/contoso.com/servicePrincipals`
+Use the Microsoft Graph Explorer to add roles to an enterprise application.
- ![Graph Explorer dialog box, with the query for fetching service principals](./media/active-directory-enterprise-app-role-management/graph-explorer-new1.png)
+1. Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) in another window and sign in using the global admin or co-admin credentials for your tenant.
- 1. From the list of fetched service principals, get the one that you need to modify. You can also use Ctrl+F to search the application from all the listed service principals. Search for the object ID that you copied from the **Properties** page, and use the following query to get to the service principal:
+ > [!NOTE]
+ > The Cloud App Administrator and App Administrator roles won't work in this scenario. Global Administrator permissions are needed for directory read and write.
- `https://graph.microsoft.com/beta/servicePrincipals/<objectID>`
+1. Select **modify permissions**, and then select **Consent** for the `Application.ReadWrite.All` and `Directory.ReadWrite.All` permissions in the list.
+1. Replace `<objectID>` in the following request with the object ID that was previously recorded and then run the query:
- ![Query for getting the service principal that you need to modify](./media/active-directory-enterprise-app-role-management/graph-explorer-new2.png)
+ `https://graph.microsoft.com/v1.0/servicePrincipals/<objectID>`
- 1. Extract the **appRoles** property from the service principal object.
+1. An enterprise application is also referred to as a service principal. Record the **appRoles** property from the service principal object that was returned. The following example shows the typical appRoles property:
- ![Details of the appRoles property](./media/active-directory-enterprise-app-role-management/graph-explorer-new3.png)
-
- If you're using the custom app (not the Azure Marketplace app), you see two default roles: user and msiam_access. For the Marketplace app, msiam_access is the only default role. You don't need to make any changes in the default roles.
-
- > [!NOTE]
- > When you are creating multiple roles, please don't modify the default role content just add the new msiam_access block of code for the new role.
-
- 1. Generate new roles for your application.
-
- The following JSON is an example of the **appRoles** object. Create a similar object to add the roles that you want for your application.
-
- ```json
+ ```json
+ {
+ "appRoles": [
{
- "appRoles": [
- {
- "allowedMemberTypes": [
- "User"
- ],
- "description": "msiam_access",
- "displayName": "msiam_access",
- "id": "b9632174-c057-4f7e-951b-be3adc52bfe6",
- "isEnabled": true,
- "origin": "Application",
- "value": null
- },
- {
- "allowedMemberTypes": [
- "User"
- ],
- "description": "Administrators Only",
- "displayName": "Admin",
- "id": "4f8f8640-f081-492d-97a0-caf24e9bc134",
- "isEnabled": true,
- "origin": "ServicePrincipal",
- "value": "Administrator"
- }
- ]
+ "allowedMemberTypes": [
+ "User"
+ ],
+ "description": "msiam_access",
+ "displayName": "msiam_access",
+ "id": "ef7437e6-4f94-4a0a-a110-a439eb2aa8f7",
+ "isEnabled": true,
+ "origin": "Application",
+ "value": null
}
- ```
-
- You can only add new roles after msiam_access for the patch operation. Also, you can add as many roles as your organization needs. Azure AD will send the value of these roles as the claim value in the SAML response. To generate the GUID values for the ID of new roles use the web tools like [this](https://www.guidgenerator.com/)
-
- 1. Go back to Graph Explorer and change the method from **GET** to **PATCH**. Patch the service principal object to have the desired roles by updating the **appRoles** property like the one shown in the preceding example. Select **Run Query** to execute the patch operation. A success message confirms the creation of the role.
-
- ![Patch operation with success message](./media/active-directory-enterprise-app-role-management/graph-explorer-new11.png)
-
-1. After the service principal is patched with more roles, you can assign users to the respective roles. You can assign the users by going to portal and browsing to the application. Select the **Users and groups** tab. This tab lists all the users and groups that are already assigned to the app. You can add new users on the new roles. You can also select an existing user and select **Edit** to change the role.
+ ]
+ }
+ ```
- !["Users and groups" tab](./media/active-directory-enterprise-app-role-management/graph-explorer-new5.png)
+1. In Graph Explorer, change the method from **GET** to **PATCH**.
+1. Copy the appRoles property that was previously recorded into the **Request body** pane of Graph Explorer, add the new role definition, and then select **Run Query** to execute the patch operation. A success message confirms the creation of the role. The following example shows the addition of an *Admin* role:
- To assign the role to any user, select the new role and select the **Assign** button on the bottom of the page.
-
- !["Edit Assignment" pane and "Select Role" pane](./media/active-directory-enterprise-app-role-management/graph-explorer-new6.png)
-
-
- Refresh your session in the Azure portal to see new roles.
-
-1. Update the **Attributes** table to define a customized mapping of the role claim.
-
-1. In the **User Claims** section on the **User Attributes** dialog, perform the following steps to add SAML token attribute as shown in the below table:
-
- | Attribute name | Attribute value |
- | -- | -|
- | Role name | user.assignedroles |
-
- If the role claim value is null, then Azure AD will not send this value in the token and this is default as per design.
-
- 1. Click **Edit** icon to open **User Attributes & Claims** dialog.
+ ```json
+ {
+ "appRoles": [
+ {
+ "allowedMemberTypes": [
+ "User"
+ ],
+ "description": "msiam_access",
+ "displayName": "msiam_access",
+ "id": "ef7437e6-4f94-4a0a-a110-a439eb2aa8f7",
+ "isEnabled": true,
+ "origin": "Application",
+ "value": null
+ },
+ {
+ "allowedMemberTypes": [
+ "User"
+ ],
+ "description": "Administrators Only",
+ "displayName": "Admin",
+ "id": "4f8f8640-f081-492d-97a0-caf24e9bc134",
+ "isEnabled": true,
+ "origin": "ServicePrincipal",
+ "value": "Administrator"
+ }
+ ]
+ }
+ ```
- ![Screenshot that highlights the Edit icon used to open the User Attributes & Claims dialog box.](./media/active-directory-enterprise-app-role-management/editattribute.png)
+ You can only add new roles after msiam_access for the patch operation. Also, you can add as many roles as your organization needs. Azure AD sends the value of these roles as the claim value in the SAML response. To generate GUID values for the IDs of new roles, use a web tool such as the [Online GUID / UUID Generator](https://www.guidgenerator.com/). The appRoles property should now match what was in the request body of the query.
- 1. In the **Manage user claims** dialog, add the SAML token attribute by clicking on **Add new claim**.
+## Edit attributes
- !["Add attribute" button](./media/active-directory-enterprise-app-role-management/tutorial_attribute_04.png)
+Update the attributes to define the role claim that is included in the token.
- !["Add Attribute" pane](./media/active-directory-enterprise-app-role-management/tutorial_attribute_05.png)
+1. Locate the application in the Azure portal, and then select **Single sign-on** in the left menu.
+1. In the **Attributes & Claims** section, select **Edit**.
+1. Select **Add new claim**.
+1. In the **Name** box, type the attribute name. This example uses **Role Name** as the claim name.
+1. Leave the **Namespace** box blank.
+1. From the **Source attribute** list, select **user.assignedroles**.
+1. Select **Save**. The new **Role Name** attribute should now appear in the **Attributes & Claims** section. The claim should now be included in the access token when signing into the application.
- 1. In the **Name** box, type the attribute name as needed. This example uses **Role Name** as the claim name.
+ :::image type="content" source="media/active-directory-enterprise-app-role-management/attributes-summary.png" alt-text="Screenshot that shows a display of the list of attributes and claims defined for the application.":::
- 1. Leave the **Namespace** box blank.
+## Assign roles
- 1. From the **Source attribute** list, type the attribute value shown for that row.
+After the service principal is patched with more roles, you can assign users to the respective roles.
- 1. Select **Save**.
+1. In the Azure portal, locate the application to which the role was added.
+1. Select **Users and groups** in the left menu, and then select the user to whom you want to assign the new role.
+1. Select **Edit assignment** at the top of the pane to change the role.
+1. Select **None Selected**, select the role from the list, and then select **Select**.
+1. Select **Assign** to assign the role to the user.
-10. To test your application in a single sign-on that's initiated by an identity provider, sign in to the [Access Panel](https://myapps.microsoft.com) and select your application tile. In the SAML token, you should see all the assigned roles for the user with the claim name that you have given.
+ :::image type="content" source="media/active-directory-enterprise-app-role-management/assign-role.png" alt-text="Screenshot that shows how to assign a role to a user of an application.":::
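The same assignment can also be scripted. The following is a hedged sketch using the Microsoft Graph `appRoleAssignedTo` endpoint, reusing the Admin role ID from the earlier example; the service principal and user object IDs are placeholders:

```http
POST https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalObjectId}/appRoleAssignedTo
Content-Type: application/json

{
  "principalId": "{userObjectId}",
  "resourceId": "{servicePrincipalObjectId}",
  "appRoleId": "4f8f8640-f081-492d-97a0-caf24e9bc134"
}
```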
-## Update an existing role
+## Update roles
To update an existing role, perform the following steps: 1. Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+1. Sign in to the Graph Explorer site by using the global admin or coadmin credentials for your tenant.
+1. Replace `<objectID>` in the following request with the object ID for the application from the overview pane, and then run the query:
-1. Sign in to the Graph Explorer site by using the Global Administrator or coadmin credentials for your tenant.
-
-1. Change the version to **beta**, and fetch the list of service principals from your tenant by using the following query:
-
- `https://graph.microsoft.com/beta/servicePrincipals`
-
- If you're using multiple directories, follow this pattern: `https://graph.microsoft.com/beta/contoso.com/servicePrincipals`
-
- ![Graph Explorer dialog box, with the query for fetching service principals](./media/active-directory-enterprise-app-role-management/graph-explorer-new1.png)
-
-1. From the list of fetched service principals, get the one that you need to modify. You can also use Ctrl+F to search the application from all the listed service principals. Search for the object ID that you copied from the **Properties** page, and use the following query to get to the service principal:
-
- `https://graph.microsoft.com/beta/servicePrincipals/<objectID>`
-
- ![Query for getting the service principal that you need to modify](./media/active-directory-enterprise-app-role-management/graph-explorer-new2.png)
-
-1. Extract the **appRoles** property from the service principal object.
-
- ![Details of the appRoles property](./media/active-directory-enterprise-app-role-management/graph-explorer-new3.png)
-
-1. To update the existing role, use the following steps.
-
- ![Request body for "PATCH," with "description" and "displayname" highlighted](./media/active-directory-enterprise-app-role-management/graph-explorer-patchupdate.png)
-
- 1. Change the method from **GET** to **PATCH**.
-
- 1. Copy the existing roles and paste them under **Request Body**.
-
- 1. Update the value of a role by updating the role description, role value, or role display name as needed.
-
- 1. After you update all the required roles, select **Run Query**.
-
-## Delete an existing role
-
-To delete an existing role, perform the following steps:
+ `https://graph.microsoft.com/v1.0/servicePrincipals/<objectID>`
-1. Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) in another window.
+1. Record the **appRoles** property from the service principal object that was returned.
+1. In Graph Explorer, change the method from **GET** to **PATCH**.
+1. Copy the appRoles property that was previously recorded into the **Request body** pane of Graph Explorer, update the role definition (as shown in the example after this step), and then select **Run Query** to execute the patch operation.
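For example, to change the description and display name of the Admin role created earlier, the request body might look like the following; only the Admin entry differs from the earlier example, and the msiam_access entry is left unchanged:

```json
{
  "appRoles": [
    {
      "allowedMemberTypes": [ "User" ],
      "description": "msiam_access",
      "displayName": "msiam_access",
      "id": "ef7437e6-4f94-4a0a-a110-a439eb2aa8f7",
      "isEnabled": true,
      "origin": "Application",
      "value": null
    },
    {
      "allowedMemberTypes": [ "User" ],
      "description": "Application administrators only",
      "displayName": "Application Admin",
      "id": "4f8f8640-f081-492d-97a0-caf24e9bc134",
      "isEnabled": true,
      "origin": "ServicePrincipal",
      "value": "Administrator"
    }
  ]
}
```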
-1. Sign in to the Graph Explorer site by using the Global Administrator or coadmin credentials for your tenant.
+## Delete roles
-1. Change the version to **beta**, and fetch the list of service principals from your tenant by using the following query:
+To delete an existing role, perform the following steps:
- `https://graph.microsoft.com/beta/servicePrincipals`
-
- If you're using multiple directories, follow this pattern: `https://graph.microsoft.com/beta/contoso.com/servicePrincipals`
-
- ![Graph Explorer dialog box, with the query for fetching the list of service principals](./media/active-directory-enterprise-app-role-management/graph-explorer-new1.png)
-
-1. From the list of fetched service principals, get the one that you need to modify. You can also use Ctrl+F to search the application from all the listed service principals. Search for the object ID that you copied from the **Properties** page, and use the following query to get to the service principal:
-
- `https://graph.microsoft.com/beta/servicePrincipals/<objectID>`
-
- ![Query for getting the service principal that you need to modify](./media/active-directory-enterprise-app-role-management/graph-explorer-new2.png)
-
-1. Extract the **appRoles** property from the service principal object.
-
- ![Details of the appRoles property from the service principal object](./media/active-directory-enterprise-app-role-management/graph-explorer-new7.png)
-
-1. To delete the existing role, use the following steps.
-
- ![Request body for "PATCH," with IsEnabled set to false](./media/active-directory-enterprise-app-role-management/graph-explorer-new8.png)
-
- 1. Change the method from **GET** to **PATCH**.
-
- 1. Copy the existing roles from the application and paste them under **Request Body**.
-
- 1. Set the **IsEnabled** value to **false** for the role that you want to delete.
-
- 1. Select **Run Query**.
-
- Make sure that you have the msiam_access role, and the ID is matching in the generated role.
-
-1. After the role is disabled, delete that role block from the **appRoles** section. Keep the method as **PATCH**, and select **Run Query**.
+1. Open [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+1. Sign in to the Graph Explorer site by using the Global Administrator or coadministrator credentials for your tenant.
+1. Replace `<objectID>` in the following request with the object ID of the application (copied from the **Overview** pane in the Azure portal), and then run the query:
-1. After you run the query, the role is deleted.
+ `https://graph.microsoft.com/v1.0/servicePrincipals/<objectID>`
- The role needs to be disabled before it can be removed.
+1. Record the **appRoles** property from the service principal object that was returned.
+1. In Graph Explorer, change the method from **GET** to **PATCH**.
+1. Copy the appRoles property that was previously recorded into the **Request body** pane of Graph Explorer, set the **IsEnabled** value to **false** for the role that you want to delete, and then select **Run Query** to execute the patch operation. A role must be disabled before it can be deleted.
+1. After the role is disabled, delete that role block from the **appRoles** section. Keep the method as **PATCH**, and select **Run Query** again.
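For reference, a minimal, hypothetical request body for the disable step looks like the following. In practice, copy every existing role into the **appRoles** array and set **isEnabled** to **false** only for the role you're deleting; the placeholder `id` must match that role's GUID.

```json
{
  "appRoles": [
    {
      "allowedMemberTypes": [ "User" ],
      "description": "Role to be deleted",
      "displayName": "Role to be deleted",
      "id": "00000000-0000-0000-0000-000000000000",
      "isEnabled": false,
      "value": "Role.ToDelete"
    }
  ]
}
```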
## Next steps
-For additional steps, see the [app documentation](../saas-apps/tutorial-list.md).
-
-<!--Image references-->
-
-[1]: ./media/active-directory-enterprise-app-role-management/tutorial_general_01.png
-[2]: ./media/active-directory-enterprise-app-role-management/tutorial_general_02.png
-[3]: ./media/active-directory-enterprise-app-role-management/tutorial_general_03.png
-[4]: ./media/active-directory-enterprise-app-role-management/tutorial_general_04.png
+- For information about customizing claims, see [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md).
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 02/03/2023 Last updated : 02/10/2023
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID >[!NOTE]
->This information last updated on February 3rd, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on February 10th, 2023.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Azure Active Directory Premium P1 for faculty | AAD_PREMIUM_FACULTY | 30fc3c36-5a95-4956-ba57-c09c2a600bb9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9) | | Azure Active Directory Premium P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) | | Azure Information Protection Plan 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
+| Azure Information Protection Premium P1 for Government | RIGHTSMANAGEMENT_CE_GOV | 78362de1-6942-4bb8-83a1-a32aa67e6e2c | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597) |
| Business Apps (free) | SMB_APPS | 90d8b3f8-712e-4f7b-aa1e-62e7ae6cbe96 | DYN365BC_MS_INVOICING (39b5c996-467e-4e60-bd62-46066f572726)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | Microsoft Invoicing (39b5c996-467e-4e60-bd62-46066f572726)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | | Common Data Service Database Capacity | CDS_DB_CAPACITY | e612d426-6bc3-4181-9658-91aa906b0ac0 | CDS_DB_CAPACITY (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Database Capacity (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Common Data Service Database Capacity for Government | CDS_DB_CAPACITY_GOV | eddf428b-da0e-4115-accf-b29eb0b83965 | CDS_DB_CAPACITY_GOV (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Common Data Service for Apps Database Capacity for Government (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)|
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Exchange Online (Plan 1) for Students | EXCHANGESTANDARD_STUDENT | ad2fe44a-915d-4e2b-ade1-6766d50a9d9c | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122) | | Exchange Online (Plan 1) for Alumni with Yammer | EXCHANGESTANDARD_ALUMNI | aa0f9eb7-eff2-4943-8424-226fb137fcad | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Exchange Online (PLAN 2) | EXCHANGEENTERPRISE | 19ec0d23-8335-4cbd-94ac-6050e30712fa | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) |
+| Exchange Online (Plan 2) for GCC | EXCHANGEENTERPRISE_GOV | 7be8dc28-4da4-4e6d-b9b9-c60f2806df8a | EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/> INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117) | Exchange Online (Plan 2) for Government (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117) |
| Exchange Online Archiving for Exchange Online | EXCHANGEARCHIVE_ADDON | ee02fd1b-340e-4a4b-b355-4a514e4c8943 | EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793) | | Exchange Online Archiving for Exchange Server | EXCHANGEARCHIVE | 90b5e015-709a-4b8b-b08e-3200f994494c | EXCHANGE_S_ARCHIVE (da040e0a-b393-4bea-bb76-928b3fa1cf5a) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE SERVER (da040e0a-b393-4bea-bb76-928b3fa1cf5a) | | Exchange Online Essentials (ExO P1 Based) | EXCHANGEESSENTIALS | 7fc0182e-d107-4556-8329-7caaa511197b | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c) | EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)|
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
For the Collaboration restrictions option, the organization's business requireme
## External users and guest users in Teams
-Teams differentiates between external users (outside your organization) and guest users (guest accounts). You can manage collaboration setting in the [Teams Admin portal](https://admin.teams.microsoft.com/company-wide-settings/external-communications) under Org-wide settings. Authorized account credentials are required to sign in to the Teams Admin portal.
+Teams differentiates between external users (outside your organization) and guest users (guest accounts). You can manage collaboration settings in the [Microsoft Teams admin center](https://admin.teams.microsoft.com/company-wide-settings/external-communications) under Org-wide settings. Authorized account credentials are required to sign in to the Microsoft Teams admin center.
* **External Access** - Teams allows external access by default. The organization can communicate with all external domains * Use External Access setting to restrict or allow domains
The External Identities collaboration feature in Azure AD controls permissions. Y
Learn more: * [Manage external meetings and chat in Microsoft Teams](/microsoftteams/manage-external-access)
-* [Microsoft 365 identity models and Azure AD](/microsoft-365/enterprise/about-microsoft-365-identity)
+* [Step 1. Determine your cloud identity model](/microsoft-365/enterprise/about-microsoft-365-identity)
* [Identity models and authentication for Microsoft Teams](/microsoftteams/identify-models-authentication) * [Sensitivity labels for Microsoft Teams](/microsoftteams/sensitivity-labels)
active-directory Auth Prov Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-prov-overview.md
+
+ Title: Azure Active Directory synchronization protocol overview
+description: Architectural guidance on integrating Azure AD with legacy synchronization protocols
++++++++ Last updated : 2/8/2023++++++
+# Azure Active Directory integrations with synchronization protocols
+
+Microsoft Azure Active Directory (Azure AD) enables integration with many synchronization protocols. The synchronization integrations enable you to sync user and group data to Azure AD, and then use Azure AD management capabilities. Some sync patterns also enable automated provisioning.
+
+## Synchronization patterns
+
+The following table presents Azure AD integration with synchronization patterns and their capabilities. Select the name of a pattern to see:
+
+* A detailed description
+
+* When to use it
+
+* Architectural diagram
+
+* Explanation of system components
+
+* Links for how to implement the integration
+++
+| Synchronization pattern| Directory synchronization| User provisioning |
+| - | - | - |
+| [Directory synchronization](sync-directory.md)| ![check mark](./media/authentication-patterns/check.png)| |
+| [LDAP Synchronization](sync-ldap.md)| ![check mark](./media/authentication-patterns/check.png)| |
+| [SCIM synchronization](sync-scim.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) |
active-directory Auth Sync Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-sync-overview.md
Previously updated : 1/10/2023 Last updated : 2/8/2023
-# Azure Active Directory integrations with authentication and synchronization protocols
+# Azure Active Directory integrations with authentication protocols
-Microsoft Azure Active Directory (Azure AD) enables integration with many authentication and synchronization protocols. The authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods. The synchronization integrations enable you to sync user and group data to Azure AD, and then user Azure AD management capabilities. Some sync patterns also enable automated provisioning.
+Microsoft Azure Active Directory (Azure AD) enables integration with many authentication protocols. The authentication integrations enable you to use Azure AD and its security and management features with little or no changes to your applications that use legacy authentication methods.
## Legacy authentication protocols
The following table presents authentication Azure AD integration with legacy aut
| [Windows Authentication - Kerberos Constrained Delegation](auth-kcd.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) |
-
-## Synchronization patterns
-
-The following table presents Azure AD integration with synchronization patterns and their capabilities. Select the name of a pattern to see
-
-* A detailed description
-
-* When to use it
-
-* Architectural diagram
-
-* Explanation of system components
-
-* Links for how to implement the integration
-
-| Synchronization pattern| Directory synchronization| User provisioning |
-| - | - | - |
-| [Directory synchronization](sync-directory.md)| ![check mark](./media/authentication-patterns/check.png)| |
-| [LDAP Synchronization](sync-ldap.md)| ![check mark](./media/authentication-patterns/check.png)| |
-| [SCIM synchronization](sync-scim.md)| ![check mark](./media/authentication-patterns/check.png)| ![check mark](./media/authentication-patterns/check.png) |
active-directory Service Accounts Govern On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-govern-on-premises.md
Previously updated : 02/07/2023 Last updated : 02/10/2023
When you create service accounts, consider the information in the following tabl
| Ownership| Ensure there's an account owner who requests and assumes responsibility | | Scope| Define the scope, and anticipate usage duration| | Purpose| Create service accounts for one purpose |
-| Permissions | Apply the principle of least permission:<li>Don't assign permissions to built-in groups, such as administrators<li>Remove local machine permissions, where feasible<li>Tailor access, and use AD delegation for directory access<li>Use granular access permissions<li>Set account expiration and location restrictions on user-based service accounts |
-| Monitor and audit use| <li>Monitor sign-in data, and ensure it matches the intended usage <li>Set alerts for anomalous usage |
+| Permissions | Apply the principle of least permission:<br/> - Don't assign permissions to built-in groups, such as administrators<br/> - Remove local machine permissions, where feasible<br/> - Tailor access, and use AD delegation for directory access<br/> - Use granular access permissions<br/> - Set account expiration and location restrictions on user-based service accounts |
+| Monitor and audit use| - Monitor sign-in data, and ensure it matches the intended usage<br/> - Set alerts for anomalous usage |
### User account restrictions For user accounts used as service accounts, apply the following settings:
-* Account expiration - set the service account to automatically expire, after its review period, unless the account can continue
-* LogonWorkstations - restrict service account sign-in permissions
+* **Account expiration** - set the service account to automatically expire, after its review period, unless the account can continue
+* **LogonWorkstations** - restrict service account sign-in permissions
* If it runs locally and accesses resources on the machine, restrict it from signing in elsewhere
-* Can't change password - set the parameter to **true** to prevent the service account from changing its own password
+* **Can't change password** - set the parameter to **true** to prevent the service account from changing its own password
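A minimal PowerShell sketch of applying these restrictions with the ActiveDirectory module follows; the account name, expiration date, and workstation list are placeholders for your own values.

```powershell
# Placeholder account name; requires the ActiveDirectory (RSAT) module.
Import-Module ActiveDirectory

# Account expiration: expire the account at the end of its review period.
Set-ADAccountExpiration -Identity 'svc-app01' -DateTime (Get-Date).AddMonths(6)

# LogonWorkstations: restrict where the account can sign in (userWorkstations attribute).
Set-ADUser -Identity 'svc-app01' -Replace @{ userWorkstations = 'APPHOST01,APPHOST02' }

# Can't change password: prevent the account from changing its own password.
Set-ADUser -Identity 'svc-app01' -CannotChangePassword $true
```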
## Lifecycle management process
Consider the following restrictions, although some might not be relevant to your
* For user accounts used as service accounts, define a realistic end date * Use the **Account Expires** flag to set the date * Learn more: [Set-ADAccountExpiration](/powershell/module/activedirectory/set-adaccountexpiration)
-* Sign in to the [LogonWorkstation](/powershell/module/activedirectory/set-aduser)
-* [Password policy](../../active-directory-domain-services/password-policy.md) requirements
-* Create accounts in an [organizational unit location](/windows-server/identity/ad-ds/plan/delegating-administration-of-account-ous-and-resource-ous) that ensures only some users will manage it
-* Set up and collect auditing that detects [service account changes](/windows/security/threat-protection/auditing/audit-directory-service-changes), and [service account usage](https://www.manageengine.com/products/active-directory-audit/how-to/audit-kerberos-authentication-events.html)
+* See, [Set-ADUser (Active Directory)](/powershell/module/activedirectory/set-aduser)
+* Password policy requirements
+ * See, [Password and account lockout policies on Azure AD Domain Services managed domains](../../active-directory-domain-services/password-policy.md)
+* Create accounts in an organizational unit location that ensures only some users will manage it
+ * See, [Delegating Administration of Account OUs and Resource OUs](/windows-server/identity/ad-ds/plan/delegating-administration-of-account-ous-and-resource-ous)
+* Set up and collect auditing that detects service account changes:
+ * See, [Audit Directory Service Changes](/windows/security/threat-protection/auditing/audit-directory-service-changes), and
+ * Go to manageengine.com for [How to audit Kerberos authentication events in AD](https://www.manageengine.com/products/active-directory-audit/how-to/audit-kerberos-authentication-events.html)
* Grant account access more securely before it goes into production ### Service account reviews
To deprovision:
5. Create a business policy that determines the amount of time that accounts are disabled. 6. Delete the service account.
- * MSAs - see, [Uninstall the account](/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps&preserve-view=true). Use PowerShell, or delete it manually from the managed service account container.
- * Computer or user accounts - manually delete the account from Active Directory
+ * **MSAs** - see, [Uninstall-ADServiceAccount](/powershell/module/activedirectory/uninstall-adserviceaccount?view=winserver2012-ps&preserve-view=true)
+ * Use PowerShell, or delete it manually from the managed service account container
+ * **Computer or user accounts** - manually delete the account from Active Directory
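For the MSA case, a minimal PowerShell sketch follows; the account name is a placeholder, and the ActiveDirectory module is assumed.

```powershell
# Uninstall the managed service account from the host, then remove it from the directory.
Uninstall-ADServiceAccount -Identity 'svc-gmsa01'
Remove-ADServiceAccount -Identity 'svc-gmsa01'
```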
## Next steps
active-directory Service Accounts Group Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-group-managed.md
Previously updated : 02/06/2023 Last updated : 02/09/2023
If a service doesn't support gMSAs, you can use a standalone managed service acc
If you can't use a gMSA or sMSA supported by your service, configure the service to run as a standard user account. Service and domain administrators are required to observe strong password management processes to help keep the account secure.
-## Assess gSMA security posture
+## Assess gMSA security posture
gMSAs are more secure than standard user accounts, which require ongoing password management. However, consider gMSA scope of access in relation to security posture. Potential security issues and mitigations for using gMSAs are shown in the following table: | Security issue| Mitigation | | - | - |
-| gMSA is a member of privileged groups | <li>Review your group memberships. Create a PowerShell script to enumerate group memberships. Filter the resultant CSV file by gMSA file names.<li>Remove the gMSA from privileged groups.<li>Grant the gMSA rights and permissions it requires to run its service. See your service vendor.
-| gMSA has read/write access to sensitive resources | <li>Audit access to sensitive resources.<li>Archive audit logs to a SIEM, such as Azure Log Analytics or Microsoft Sentinel, for analysis.<li>Remove unnecessary resource permissions if there's an unnecessary access level. |
+| gMSA is a member of privileged groups | - Review your group memberships. Create a PowerShell script to enumerate group memberships. Filter the resultant CSV file by gMSA file names<br/> - Remove the gMSA from privileged groups<br/> - Grant the gMSA rights and permissions it requires to run its service. See your service vendor. |
+| gMSA has read/write access to sensitive resources | - Audit access to sensitive resources<br/> - Archive audit logs to a SIEM, such as Azure Log Analytics or Microsoft Sentinel<br/> - Remove unnecessary resource permissions if there's an unnecessary access level |
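The first mitigation in the table suggests enumerating group memberships with PowerShell. A minimal sketch, assuming the ActiveDirectory module is installed, follows; the output path is a placeholder.

```powershell
# Export each gMSA and its group memberships to a CSV file for review.
Import-Module ActiveDirectory

Get-ADServiceAccount -Filter * -Properties MemberOf |
    Select-Object Name, SamAccountName,
        @{ Name = 'MemberOf'; Expression = { $_.MemberOf -join ';' } } |
    Export-Csv -Path .\gmsa-memberships.csv -NoTypeInformation
```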
## Find gMSAs
To manage gMSAs, use the following Active Directory PowerShell cmdlets:
## Move to a gMSA gMSAs are a secure service account type for on-premises. It's recommended you use gMSAs, if possible. In addition, consider moving your services to Azure and your service accounts to Azure Active Directory. +
+ > [!NOTE]
+ > Before you configure your service to use the gMSA, see [Get started with group managed service accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj128431(v=ws.11)).
To move to a gMSA:
-1. Ensure the [Key Distribution Service (KDS) root key](/windows-server/security/group-managed-service-accounts/create-the-key-distribution-services-kds-root-key) is deployed in the forest. This is a one-time operation.
-2. [Create a new gMSA](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts).
+1. Ensure the Key Distribution Service (KDS) root key is deployed in the forest. This is a one-time operation. See, [Create the Key Distribution Services KDS Root Key](/windows-server/security/group-managed-service-accounts/create-the-key-distribution-services-kds-root-key).
+2. Create a new gMSA. See, [Getting Started with Group Managed Service Accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts).
3. Install the new gMSA on hosts that run the service.
-
- > [!NOTE]
- > Before configuring your service to use the gMSA, see [Get started with group managed service accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj128431(v=ws.11)).
- 4. Change your service identity to gMSA. 5. Specify a blank password. 6. Validate your service is working under the new gMSA identity.
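A minimal PowerShell sketch of steps 1 through 3 follows; the gMSA name, DNS host name, and host group are placeholders, and the ActiveDirectory module is assumed.

```powershell
# One-time, forest-wide operation: deploy the KDS root key.
Add-KdsRootKey -EffectiveImmediately

# Create the gMSA and allow the host computers (via a group) to retrieve its password.
New-ADServiceAccount -Name 'gmsa-app01' `
    -DNSHostName 'gmsa-app01.contoso.com' `
    -PrincipalsAllowedToRetrieveManagedPassword 'AppHosts'

# On each host that runs the service, install and verify the gMSA.
Install-ADServiceAccount -Identity 'gmsa-app01'
Test-ADServiceAccount -Identity 'gmsa-app01'
```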
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-managed-identities.md
Azure control plane operations are managed by Azure Resource Manager and use Azu
Learn more: * [What is Azure Resource Manager?](../../azure-resource-manager/management/overview.md)
-* [What is Azure RBAC?](../../role-based-access-control/overview.md)
+* [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
* [Azure control plane and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) * [Azure services that can use managed identities to access other services](../managed-identities-azure-resources/managed-identities-status.md)
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-principal.md
An Azure Active Directory (Azure AD) service principal is the local representa
Learn more: [Application and service principal objects in Azure AD](../develop/app-objects-and-service-principals.md)
-### Tenant-service principal relationships
+## Tenant-service principal relationships
A single-tenant application has one service principal in its home tenant. A multi-tenant web application or API requires a service principal in each tenant. A service principal is created when a user from that tenant consents to use of the application or API. This consent creates a one-to-many relationship between the multi-tenant application and its associated service principals.
An application instance has two properties: the ApplicationID (or ClientID) and
The ApplicationID represents the global application and is the same for application instances, across tenants. The ObjectID is a unique value for an application object. As with users, groups, and other resources, the ObjectID helps to identify an application instance in Azure AD.
-To learn more, see [Application and service principal relationship](../develop/app-objects-and-service-principals.md)
+To learn more, see [Application and service principal relationship in Azure AD](../develop/app-objects-and-service-principals.md)
### Create an application and its service principal object You can create an application and its service principal object (ObjectID) in a tenant using: * Azure PowerShell
-* Azure command-line interface (CLI)
+* Azure command-line interface (Azure CLI)
* Microsoft Graph * The Azure portal * Other tools
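For example, a minimal Azure PowerShell sketch follows; the display name is a placeholder and the Az.Resources module is assumed.

```powershell
# Create an application registration and its service principal in the tenant.
$app = New-AzADApplication -DisplayName 'example-app'
$sp  = New-AzADServicePrincipal -ApplicationId $app.AppId

# The service principal's object ID identifies this instance in the tenant;
# the AppId (client ID) is the same across tenants for a multi-tenant app.
$sp.Id
$app.AppId
```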
For more information on Azure Key Vault and how to use it for certificate and se
* [About Azure Key Vault](../../key-vault/general/overview.md) * [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy.md)
- ### Challenges and mitigations
+### Challenges and mitigations
When using service principals, use the following table to match challenges and mitigations.
Conditional Access:
Use Conditional Access to block service principals from untrusted locations.
-See, [Conditional Access for workload identities](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy)
+See, [Create a location-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy)
active-directory Create Access Review Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review-pim-for-groups.md
+
+ Title: Create an access review of PIM for Groups - Azure AD (preview)
+description: Learn how to create an access review of PIM for Groups in Azure Active Directory.
+++
+editor: markwahl-msft
++
+ na
++ Last updated : 09/14/2022++++
+
+# Create an access review of PIM for Groups in Azure AD (preview)
+
+This article describes how to create one or more access reviews for PIM for Groups. A review includes both the active members of the group at the time the review is created and the eligible members of the group.
+
+## Prerequisites
+
+- Azure AD Premium P2.
+- Only Global administrators and Privileged Role administrators can create reviews on PIM for Groups. For more information, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
+
+For more information, see [License requirements](access-reviews-overview.md#license-requirements).
+
+## Create a PIM for Groups access review
+
+### Scope
+1. Sign in to the Azure portal and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+
+2. On the left menu, select **Access reviews**.
+
+3. Select **New access review** to create a new access review.
+
+ ![Screenshot that shows the Access reviews pane in Identity Governance.](./media/create-access-review/access-reviews.png)
+
+4. In the **Select what to review** box, select **Teams + Groups**.
+
+ ![Screenshot that shows creating an access review.](./media/create-access-review/select-what-review.png)
+
+5. Under **Review Scope**, select **Select Teams + groups**. A list of groups to choose from appears on the right.
+
+ ![Screenshot that shows selecting Teams + Groups.](./media/create-access-review/create-pim-review.png)
+
+> [!NOTE]
+> When a group managed by PIM for Groups is selected, the users under review include all eligible users and active users in that group.
+
+6. Now you can select a scope for the review. Your options are:
+ - **Guest users only**: This option limits the access review to only the Azure AD B2B guest users in your directory.
+ - **Everyone**: This option scopes the access review to all user objects associated with the resource.
++
+7. If you're conducting a group membership review, you can create the access review for only the inactive users in the group. In the **Users scope** section, select the box next to **Inactive users (on tenant level)**. The review then focuses only on users who haven't signed in to the tenant, either interactively or non-interactively. Then specify **Days inactive** with a number of days, up to 730 days (two years). Only users in the group who have been inactive for the specified number of days are included in the review.
+
+> [!NOTE]
+> Recently created users are not affected when configuring the inactivity time. The Access Review will check if a user has been created in the time frame configured and disregard users who haven't existed for at least that amount of time. For example, if you set the inactivity time as 90 days and a guest user was created or invited less than 90 days ago, the guest user will not be in scope of the Access Review. This ensures that a user can sign in at least once before being removed.
+
+8. Select **Next: Reviews**.
+
+After you have reached this step, you may follow the instructions outlined under **Next: Reviews** in the [Create an access review of groups or applications](create-access-review.md#next-reviews) article to complete your access review.
+
+> [!NOTE]
+> Review of PIM for Groups will only assign active owner(s) as the reviewers. Eligible owners are not included. At least one fallback reviewer is required for a PIM for Groups review. If there are no active owner(s) when the review begins, the fallback reviewer(s) will be assigned to the review.
+
+## Next steps
+
+- [Create an access review of groups or applications](create-access-review.md)
+- [Approve activation requests for PIM for Groups members and owners (preview)](../privileged-identity-management/groups-approval-workflow.md)
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
If you are reviewing access to an application, then before creating the review,
If you choose either **Managers of users** or **Group owner(s)**, you can also specify a fallback reviewer. Fallback reviewers are asked to do a review when the user has no manager specified in the directory or if the group doesn't have an owner. >[!IMPORTANT]
- > For Privileged Access Groups (Preview), you must select **Group owner(s)**. It is mandatory to assign at least one fallback reviewer to the review. The review will only assign active owner(s) as the reviewer(s). Eligible owners are not included. If there are no active owners when the review begins, the fallback reviewer(s) will be assigned to the review.
+ > For PIM for Groups (Preview), you must select **Group owner(s)**. It is mandatory to assign at least one fallback reviewer to the review. The review will only assign active owner(s) as the reviewer(s). Eligible owners are not included. If there are no active owners when the review begins, the fallback reviewer(s) will be assigned to the review.
![Screenshot that shows New access review.](./media/create-access-review/new-access-review.png)
After one or more access reviews have started, you might want to modify or updat
## Next steps - [Complete an access review of groups or applications](complete-access-review.md)-- [Create an access review of Privileged Access Groups (preview)](create-access-review-privileged-access-groups.md)
+- [Create an access review of PIM for Groups (preview)](create-access-review-pim-for-groups.md)
- [Review access to groups or applications](perform-access-review.md) - [Review access for yourself to groups or applications](review-your-access.md)
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
For Microsoft Graph the parameters for the **Generate Temporary Access Pass and
### Add user to groups
-Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and privileged access groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled groups, distribution groups, dynamic groups, and groups managed by PIM for Groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task. :::image type="content" source="media/lifecycle-workflow-task/add-group-task.png" alt-text="Screenshot of Workflows task: Add user to group task.":::
For Microsoft Graph the parameters for the **Disable user account** task are as
### Remove user from selected groups
-Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and privileged access groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled groups, distribution groups, dynamic groups, and groups managed by PIM for Groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-group-task.png" alt-text="Screenshot of Workflows task: Remove user from select groups.":::
For Microsoft Graph the parameters for the **Remove user from selected groups**
### Remove users from all groups
-Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and privileged access groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled groups, distribution groups, dynamic groups, and groups managed by PIM for Groups aren't supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal.
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
To synchronize your password, Azure AD Connect sync extracts your password hash
The actual data flow of the password hash synchronization process is similar to the synchronization of user data. However, passwords are synchronized more frequently than the standard directory synchronization window for other attributes. The password hash synchronization process runs every 2 minutes. You cannot modify the frequency of this process. When you synchronize a password, it overwrites the existing cloud password.
-The first time you enable the password hash synchronization feature, it performs an initial synchronization of the passwords of all in-scope users. You cannot explicitly define a subset of user passwords that you want to synchronize. However, if there are multiple connectors, it is possible to disable password hash sync for some connectors but not others using the [Set-ADSyncAADPasswordSyncConfiguration](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md) cmdlet.
+The first time you enable the password hash synchronization feature, it performs an initial synchronization of the passwords of all in-scope users. [Staged Rollout](how-to-connect-staged-rollout.md) allows you to selectively test groups of users with cloud authentication capabilities like Azure AD Multi-Factor Authentication (MFA), Conditional Access, Identity Protection for leaked credentials, Identity Governance, and others, before cutting over your domains. You cannot explicitly define a subset of user passwords that you want to synchronize. However, if there are multiple connectors, it is possible to disable password hash sync for some connectors but not others using the [Set-ADSyncAADPasswordSyncConfiguration](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md) cmdlet.
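For example, the following sketch, run on the Azure AD Connect server, disables password hash sync for a single connector; the connector names are placeholders, and the ADSync module that ships with Azure AD Connect is assumed.

```powershell
# Run on the Azure AD Connect server; connector names are placeholders.
Import-Module ADSync

Set-ADSyncAADPasswordSyncConfiguration -SourceConnector 'contoso.com' `
    -TargetConnector 'contoso.onmicrosoft.com - AAD' `
    -Enable $false
```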
When you change an on-premises password, the updated password is synchronized, most often in a matter of minutes. The password hash synchronization feature automatically retries failed synchronization attempts. If an error occurs during an attempt to synchronize a password, an error is logged in your event viewer.
The password hash synchronization feature automatically retries failed synchroni
The synchronization of a password has no impact on the user who is currently signed in. Your current cloud service session is not immediately affected by a synchronized password change that occurs, while you are signed in, to a cloud service. However, when the cloud service requires you to authenticate again, you need to provide your new password.
-A user must enter their corporate credentials a second time to authenticate to Azure AD, regardless of whether they're signed in to their corporate network. This pattern can be minimized, however, if the user selects the Keep me signed in (KMSI) check box at sign-in. This selection sets a session cookie that bypasses authentication for 180 days. KMSI behavior can be enabled or disabled by the Azure AD administrator. In addition, you can reduce password prompts by turning on [Seamless SSO](how-to-connect-sso.md), which automatically signs users in when they are on their corporate devices connected to your corporate network.
+A user must enter their corporate credentials a second time to authenticate to Azure AD, regardless of whether they're signed in to their corporate network. This pattern can be minimized, however, if the user selects the Keep me signed in (KMSI) check box at sign-in. This selection sets a session cookie that bypasses authentication for 180 days. KMSI behavior can be enabled or disabled by the Azure AD administrator. In addition, you can reduce password prompts by configuring [Azure AD join](../devices/concept-azure-ad-join.md) or [Hybrid Azure AD join](../devices/concept-azure-ad-join-hybrid.md), which automatically signs users in when they are on their corporate devices connected to your corporate network.
> [!NOTE] > Password sync is only supported for the object type user in Active Directory. It is not supported for the iNetOrgPerson object type.
Once enabled, Azure AD does not go to each synchronized user to remove the `Disa
After the *EnforceCloudPasswordPolicyForPasswordSyncedUsers* feature is enabled, new users are provisioned without a PasswordPolicies value.
-It is recommended to enable *EnforceCloudPasswordPolicyForPasswordSyncedUsers* prior to enabling password hash sync, so that the initial sync of password hashes does not add the `DisablePasswordExpiration` value to the PasswordPolicies attribute for the users.
+>[!TIP]
+>It is recommended to enable *EnforceCloudPasswordPolicyForPasswordSyncedUsers* prior to enabling password hash sync, so that the initial sync of password hashes does not add the `DisablePasswordExpiration` value to the PasswordPolicies attribute for the users.
The default Azure AD password policy requires users to change their passwords every 90 days. If your policy in AD is also 90 days, the two policies should match. However, if the AD policy is not 90 days, you can update the Azure AD password policy to match by using the Set-MsolPasswordPolicy PowerShell command.
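A minimal sketch of both settings with the MSOnline module follows; the domain name and the 120-day validity period are placeholders for your own values.

```powershell
# Requires the MSOnline module and a connected session.
Import-Module MSOnline
Connect-MsolService

# Enforce the cloud password policy for password-synced users.
Set-MsolDirSyncFeature -Feature EnforceCloudPasswordPolicyForPasswordSyncedUsers -Enable $true

# Align the Azure AD password expiration policy with the on-premises policy (example: 120 days).
Set-MsolPasswordPolicy -DomainName 'contoso.com' -ValidityPeriod 120 -NotificationDays 14
```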
If you use Azure AD Domain Services to provide legacy authentication for applica
## Enable password hash synchronization >[!IMPORTANT]
->If you are migrating from AD FS (or other federation technologies) to Password Hash Synchronization, we highly recommend that you follow our detailed deployment guide published [here](https://aka.ms/adfstophsdpdownload).
+>If you are migrating from AD FS (or other federation technologies) to Password Hash Synchronization, view [Resources for migrating applications to Azure AD](../manage-apps/migration-resources.md).
When you install Azure AD Connect by using the **Express Settings** option, password hash synchronization is automatically enabled. For more information, see [Getting started with Azure AD Connect using express settings](how-to-connect-install-express.md).
If you have problems with password hash synchronization, see [Troubleshoot passw
## Next steps * [Azure AD Connect sync: Customizing synchronization options](how-to-connect-sync-whatis.md) * [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
-* [Get a step-by-step deployment plan for migrating from ADFS to Password Hash Synchronization](https://aka.ms/authenticationDeploymentPlan)
+* [Resources for migrating applications to Azure AD](../manage-apps/migration-resources.md)
active-directory How To Connect Pta Current Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-current-limitations.md
The following scenarios are supported: - User sign-ins to web browser-based applications.-- User sign-ins to Outlook clients using legacy protocols such as Exchange ActiveSync, EAS, SMTP, POP and IMAP. - User sign-ins to legacy Office client applications and Office applications that support [modern authentication](https://www.microsoft.com/en-us/microsoft-365/blog/2015/11/19/updated-office-365-modern-authentication-public-preview): Office 2013 and 2016 versions. - User sign-ins to legacy protocol applications such as PowerShell version 1.0 and others.-- Azure AD joins for Windows 10 devices.-- App passwords for Multi-Factor Authentication.
+- Azure AD joins for Windows 10 and later devices.
+- Hybrid Azure AD joins for Windows 10 and later devices.
## Unsupported scenarios
The following scenarios are _not_ supported:
## Next steps - [Quick start](how-to-connect-pta-quick-start.md): Get up and running with Azure AD Pass-through Authentication.-- [Migrate from AD FS to Pass-through Authentication](https://aka.ms/ADFSTOPTADPDownload) - A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
+- [Migrate your apps to Azure AD](../manage-apps/migration-resources.md): Resources to help you migrate application access and authentication to Azure AD.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Learn how to configure the Smart Lockout capability on your tenant to protect user accounts. - [Technical deep dive](how-to-connect-pta-how-it-works.md): Understand how the Pass-through Authentication feature works. - [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions about the Pass-through Authentication feature. - [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature. - [Security deep dive](how-to-connect-pta-security-deep-dive.md): Get deep technical information on the Pass-through Authentication feature.
+- [Hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.
- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature. - [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789): Use the Azure Active Directory Forum to file new feature requests.
active-directory How To Connect Pta How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-how-it-works.md
The following diagram illustrates all the components and the steps involved:
## Next steps - [Current limitations](how-to-connect-pta-current-limitations.md): Learn which scenarios are supported and which ones are not. - [Quick Start](how-to-connect-pta-quick-start.md): Get up and running on Azure AD Pass-through Authentication.-- [Migrate from AD FS to Pass-through Authentication](https://aka.ms/adfstoPTADP) - A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
+- [Migrate your apps to Azure AD](../manage-apps/migration-resources.md): Resources to help you migrate application access and authentication to Azure AD.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Configure the Smart Lockout capability on your tenant to protect user accounts. - [Frequently Asked Questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions. - [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature. - [Security Deep Dive](how-to-connect-pta-security-deep-dive.md): Get deep technical information on the Pass-through Authentication feature.
+- [Hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.    
- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature. - [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789): Use the Azure Active Directory Forum to file new feature requests.
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Azure Active Directory (Azure AD) Pass-through Authentication allows your users to sign in to both on-premises and cloud-based applications by using the same passwords. Pass-through Authentication signs users in by validating their passwords directly against on-premises Active Directory. >[!IMPORTANT]
->If you are migrating from AD FS (or other federation technologies) to Pass-through Authentication, we highly recommend that you follow our detailed deployment guide published [here](https://aka.ms/adfstoPTADPDownload).
-
+>If you are migrating from AD FS (or other federation technologies) to Pass-through Authentication, view [Resources for migrating applications to Azure AD](../manage-apps/migration-resources.md).
>[!NOTE] >If you're deploying Pass-through Authentication with the Azure Government cloud, view [Hybrid Identity Considerations for Azure Government](./reference-connect-government-cloud.md).
Ensure that the following prerequisites are in place.
>[!IMPORTANT] >From a security standpoint, administrators should treat the server running the PTA agent as if it were a domain controller. The PTA agent servers should be hardened along the same lines as outlined in [Securing Domain Controllers Against Attack](/windows-server/identity/ad-ds/plan/security-best-practices/securing-domain-controllers-against-attack)
-### In the Azure Active Directory admin center
+### In the Microsoft Entra admin center
1. Create a cloud-only Hybrid Identity Administrator account or a Hybrid Identity administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only Hybrid Identity Administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant. 2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
Ensure that the following prerequisites are in place.
### In your on-premises environment 1. Identify a server running Windows Server 2016 or later to run Azure AD Connect. If not enabled already, [enable TLS 1.2 on the server](./how-to-connect-install-prerequisites.md#enable-tls-12-for-azure-ad-connect). Add the server to the same Active Directory forest as the users whose passwords you need to validate. It should be noted that installation of Pass-Through Authentication agent on Windows Server Core versions is not supported.
-2. Install the [latest version of Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) on the server identified in the preceding step. If you already have Azure AD Connect running, ensure that the version is 1.1.750.0 or later.
+2. Install the [latest version of Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) on the server identified in the preceding step. If you already have Azure AD Connect running, ensure that the version is supported.
>[!NOTE] >Azure AD Connect versions 1.1.557.0, 1.1.558.0, 1.1.561.0, and 1.1.614.0 have a problem related to password hash synchronization. If you _don't_ intend to use password hash synchronization in conjunction with Pass-through Authentication, read the [Azure AD Connect release notes](./reference-connect-version-history.md).
If you have already installed Azure AD Connect by using the [express installatio
![Azure AD Connect: Change user sign-in](./media/how-to-connect-pta-quick-start/changeusersignin.png) >[!IMPORTANT]
->Pass-through Authentication is a tenant-level feature. Turning it on affects the sign-in for users across _all_ the managed domains in your tenant. If you're switching from Active Directory Federation Services (AD FS) to Pass-through Authentication, you should wait at least 12 hours before shutting down your AD FS infrastructure. This wait time is to ensure that users can keep signing in to Exchange ActiveSync during the transition. For more help on migrating from AD FS to Pass-through Authentication, check out our detailed deployment plan published [here](https://aka.ms/adfstoptadpdownload).
+>Pass-through Authentication is a tenant-level feature. Turning it on affects the sign-in for users across _all_ the managed domains in your tenant. If you're switching from Active Directory Federation Services (AD FS) to Pass-through Authentication, you should wait at least 12 hours before shutting down your AD FS infrastructure. This wait time is to ensure that users can keep signing in to Exchange ActiveSync during the transition. For more help on migrating from AD FS to Pass-through Authentication, see [Resources for migrating applications to Azure AD](../manage-apps/migration-resources.md).
## Step 3: Test the feature Follow these instructions to verify that you have enabled Pass-through Authentication correctly:
-1. Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with the Hybrid Identity Administrator credentials for your tenant.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with the Hybrid Identity Administrator credentials for your tenant.
2. Select **Azure Active Directory** in the left pane. 3. Select **Azure AD Connect**. 4. Verify that the **Pass-through authentication** feature appears as **Enabled**.
For most customers, three Authentication Agents in total are sufficient for high
To begin, follow these instructions to download the Authentication Agent software:
-1. To download the latest version of the Authentication Agent (version 1.5.193.0 or later), sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with your tenant's Hybrid Identity Administrator credentials.
+1. To download the latest version of the Authentication Agent (version 1.5.193.0 or later), sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with your tenant's Hybrid Identity Administrator credentials.
2. Select **Azure Active Directory** in the left pane. 3. Select **Azure AD Connect**, select **Pass-through authentication**, and then select **Download Agent**. 4. Select the **Accept terms & download** button.
Second, you can create and run an unattended deployment script. This is useful w
Smart Lockout assists in locking out bad actors who are trying to guess your users' passwords or using brute-force methods to get in. By configuring Smart Lockout settings in Azure AD and / or appropriate lockout settings in on-premises Active Directory, attacks can be filtered out before they reach Active Directory. Read [this article](../authentication/howto-password-smart-lockout.md) to learn more on how to configure Smart Lockout settings on your tenant to protect your user accounts. ## Next steps- [Migrate from AD FS to Pass-through Authentication](https://aka.ms/adfstoptadp) - A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
+- [Migrate your apps to Azure AD](../manage-apps/migration-resources.md): Resources to help you migrate application access and authentication to Azure AD.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Learn how to configure the Smart Lockout capability on your tenant to protect user accounts. - [Current limitations](how-to-connect-pta-current-limitations.md): Learn which scenarios are supported with the Pass-through Authentication and which ones are not. - [Technical deep dive](how-to-connect-pta-how-it-works.md): Understand how the Pass-through Authentication feature works. - [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions. - [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature. - [Security deep dive](how-to-connect-pta-security-deep-dive.md): Get technical information on the Pass-through Authentication feature.
+- [Hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.
- [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature. - [UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789): Use the Azure Active Directory Forum to file new feature requests.
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
The Authentication Agents use the following steps to register themselves with Az
![Agent registration](./media/how-to-connect-pta-security-deep-dive/pta1.png)
-1. Azure AD first requests that a Hybrid Identity Administratoristrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the
+1. Azure AD first requests that a Hybrid Identity Administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the
2. The Authentication Agent then generates a key pair: a public key and a private key.
    - The key pair is generated through standard RSA 2048-bit encryption.
    - The private key stays on the on-premises server where the Authentication Agent resides.
To auto-update an Authentication Agent:
## Next steps
- [Current limitations](how-to-connect-pta-current-limitations.md): Learn which scenarios are supported and which ones are not.
- [Quickstart](how-to-connect-pta-quick-start.md): Get up and running on Azure AD Pass-through Authentication.
-- [Migrate from AD FS to Pass-through Authentication](https://aka.ms/adfstoptadpdownload) - A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication.
+- [Migrate your apps to Azure AD](../manage-apps/migration-resources.md): Resources to help you migrate application access and authentication to Azure AD.
- [Smart Lockout](../authentication/howto-password-smart-lockout.md): Configure the Smart Lockout capability on your tenant to protect user accounts.
- [How it works](how-to-connect-pta-how-it-works.md): Learn the basics of how Azure AD Pass-through Authentication works.
- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions.
active-directory How To Connect Selective Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-selective-password-hash-synchronization.md
To reduce the configuration administrative effort, you should first consider the
> Configuring selective password hash synchronization directly influences password writeback. Password changes or password resets that are initiated in Azure Active Directory write back to on-premises Active Directory only if the user is in scope for password hash synchronization.
> [!IMPORTANT]
-> Selective password hash synchronization is supported in 1.6.2.4 or later. If you are using a version lower than that, please upgrade to the latest version.
+> Selective password hash synchronization is supported in Azure AD Connect 1.6.2.4 or later. If you are using a version lower than that, upgrade to the latest version.
### The adminDescription attribute
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
PIM for Groups is part of Azure AD Privileged Identity Management – alongside
With PIM for Groups you can use policies similar to the ones you use in PIM for Azure AD Roles and PIM for Azure Resources: you can require approval for membership or ownership activation, enforce multi-factor authentication (MFA), require justification, limit maximum activation time, and more. Each group in PIM for Groups has two policies: one for activation of membership and another for activation of ownership in the group. Until January 2023, the PIM for Groups feature was called "Privileged Access Groups".
->[!Note]
-> For groups used for elevating into Azure AD roles, we recommend that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from less-privileged administrators. For example, the Helpdesk Administrator has permission to reset an eligible user's passwords.
## What are Azure AD role-assignable groups?
active-directory Groups Assign Member Owner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md
When a membership or ownership is assigned, the assignment:
- Can't be removed within five minutes of it being assigned
>[!NOTE]
->Every user who is eligible for membership in or ownership of a privileged access group must have an Azure AD Premium P2 license. For more information, see [License requirements to use Privileged Identity Management](subscription-requirements.md).
+>Every user who is eligible for membership in or ownership of a group managed with PIM for Groups must have an Azure AD Premium P2 license. For more information, see [License requirements to use Privileged Identity Management](subscription-requirements.md).
## Assign an owner or member of a group
Follow these steps to make a user an eligible member or owner of a group. You will
> For groups used for elevating into Azure AD roles, Microsoft recommends that you require an approval process for eligible member assignments. Assignments that can be activated without approval can leave you vulnerable to a security risk from another administrator with permission to reset an eligible user's password.
- Active assignments don't require the member to perform any activations to use the role. Members or owners assigned as active have the privileges assigned to the role at all times.
-1. If the assignment should be permanent (permanently eligible or permanently assigned), select the **Permanently** checkbox. Depending on the group's settings, the check box might not appear or might not be editable. For more information, check out the [Configure privileged access group settings (preview) in Privileged Identity Management](groups-role-settings.md#assignment-duration) article.
+1. If the assignment should be permanent (permanently eligible or permanently assigned), select the **Permanently** checkbox. Depending on the group's settings, the check box might not appear or might not be editable. For more information, check out the [Configure PIM for Groups settings (preview) in Privileged Identity Management](groups-role-settings.md#assignment-duration) article.
:::image type="content" source="media/pim-for-groups/pim-group-5.png" alt-text="Screenshot of where to configure the setting for add assignments." lightbox="media/pim-for-groups/pim-group-5.png":::
active-directory Groups Renew Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md
Only Global Administrators, Privileged Role Administrators, or group owners can
## When notifications are sent
-Privileged Identity Management sends email notifications to administrators and affected users of privileged access group assignments that are expiring:
+Privileged Identity Management sends email notifications to administrators and affected users of PIM for Groups assignments that are expiring:
- Within 14 days prior to expiration
- One day prior to expiration
active-directory Groups Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md
na Previously updated : 01/12/2023 Last updated : 01/27/2023
# Configure PIM for Groups settings (preview)
-In Privileged Identity Management (PIM) for groups in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define membership/ownership assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, etc. Use the following steps to configure role settings – i.e., setup the approval workflow to specify who can approve or deny requests to elevate privilege.
+In Privileged Identity Management (PIM) for groups in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define membership or ownership assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and more. Use the following steps to configure role settings and set up the approval workflow to specify who can approve or deny requests to elevate privilege.
-You need to have Global Administrator, Privileged Role Administrator, or group Owner permissions to manage settings for membership/ownership assignments of the group. Role settings are defined per role per group: all assignments for the same role (member/owner) for the same group follow same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner).
+You need to have Global Administrator, Privileged Role Administrator, or group Owner permissions to manage settings for membership or ownership assignments of the group. Role settings are defined per role per group: all assignments for the same role (member or owner) for the same group follow the same role settings. Role settings of one group are independent from role settings of another group. Role settings for one role (member) are independent from role settings for another role (owner).
## Update role settings
Follow these steps to open the settings for a group role.
Use the **Activation maximum duration** slider to set the maximum time, in hours, that an activation request for a role assignment remains active before it expires. This value can be from one to 24 hours.
-### Require multi-factor authentication (MFA) on activation
+### On activation, require multi-factor authentication
You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
User may not be prompted for multi-factor authentication if they authenticated w
For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
+### On activation, require Azure AD Conditional Access authentication context (Public Preview)
+
+You can require users who are eligible for a role to satisfy Conditional Access policy requirements: use a specific authentication method enforced through Authentication Strengths, elevate the role from an Intune-compliant device, comply with Terms of Use, and more.
+
+To enforce this requirement, you need to:
+
+1. Create Conditional Access authentication context.
+1. Configure Conditional Access policy that would enforce requirements for this authentication context.
+1. Configure authentication context in PIM settings for the role.
++
+To learn more about Conditional Access authentication context, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context).
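As a hedged illustration of the first step in this list, the following Microsoft Graph request sketches creating an authentication context class reference; the `id`, display name, and description are example values, and the Conditional Access policy and the PIM role setting still have to be configured separately (this call typically requires the Policy.ReadWrite.ConditionalAccess permission):

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/authenticationContextClassReferences
Content-Type: application/json

{
    "id": "c1",
    "displayName": "Privileged role activation",
    "description": "Example context referenced by a Conditional Access policy and a PIM role setting",
    "isAvailable": true
}
```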
+ ### Require justification on activation
You can require that users enter a business justification when they activate the eligible assignment.
You can require that users enter a support ticket when they activate the eligibl
You can require approval for activation of an eligible assignment. The approver doesn't have to be a group member or owner. When using this option, you have to select at least one approver (we recommend selecting at least two approvers); there are no default approvers.
-To learn more about approvals, see [Approve activation requests for privileged access group members and owners (preview)](groups-approval-workflow.md).
+To learn more about approvals, see [Approve activation requests for PIM for Groups members and owners (preview)](groups-approval-workflow.md).
### Assignment duration
And, you can choose one of these **active** assignment duration options:
### Require multi-factor authentication on active assignment
You can require that an administrator or group owner provide multi-factor authentication when they create an active (as opposed to eligible) assignment. Privileged Identity Management can't enforce multi-factor authentication when the user uses their role assignment because they are already active in the role from the time that it is assigned.
-User may not be prompted for multi-factor authentication if they authenticated with strong credential or provided multi-factor authentication earlier in this session.
+
+An administrator or group owner may not be prompted for multi-factor authentication if they authenticated with a strong credential or provided multi-factor authentication earlier in this session.
### Require justification on active assignment
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-apis.md
Currently in general availability, this is the final iteration of the PIM API. B
- Supporting app-only permissions.
- New features such as approval and email notification configuration.
-In the current iteration, there is no API support for PIM alerts and privileged access groups.
+In the current iteration, there is no API support for PIM alerts and PIM for Groups.
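As a hedged illustration of this generally available iteration (the GUIDs below are placeholders, and the 180-day duration is an example value), an eligible Azure AD role assignment can be created with a single Microsoft Graph request:

```http
POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleEligibilityScheduleRequests
Content-Type: application/json

{
    "action": "adminAssign",
    "justification": "Example: assign eligible access for the support team",
    "roleDefinitionId": "{roleDefinitionId}",
    "directoryScopeId": "/",
    "principalId": "{userOrGroupObjectId}",
    "scheduleInfo": {
        "startDateTime": "2023-02-10T00:00:00Z",
        "expiration": {
            "type": "afterDuration",
            "duration": "P180D"
        }
    }
}
```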
## Current permissions required
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-configure.md
Privileged Identity Management provides time-based and approval-based role activ
## What can I do with it?
-Once you set up Privileged Identity Management, you'll see **Tasks**, **Manage**, and **Activity** options in the left navigation menu. As an administrator, you'll choose between options such as managing **Azure AD roles**, managing **Azure resource** roles, or privileged access groups. When you choose what you want to manage, you see the appropriate set of options for that option.
+Once you set up Privileged Identity Management, you'll see **Tasks**, **Manage**, and **Activity** options in the left navigation menu. As an administrator, you'll choose between options such as managing **Azure AD roles**, managing **Azure resource** roles, or PIM for Groups. When you choose what you want to manage, you see the appropriate set of options for that choice.
![Screenshot of Privileged Identity Management in the Azure portal.](./media/pim-configure/pim-quickstart.png)
The following screenshot shows how an administrator assigns a role to members.
![Screenshot of Privileged Identity Management role assignment.](./media/pim-configure/role-assignment.png)
-For more information, check out the following articles: [Assign Azure AD roles](pim-how-to-add-role-to-user.md), [Assign Azure resource roles](pim-resource-roles-assign-roles.md), and [Assign eligibility for a privileged access group](groups-assign-member-owner.md)
+For more information, check out the following articles: [Assign Azure AD roles](pim-how-to-add-role-to-user.md), [Assign Azure resource roles](pim-resource-roles-assign-roles.md), and [Assign eligibility for PIM for Groups](groups-assign-member-owner.md)
### Activate
The following screenshot shows how members activate their role to a limited time
If the role requires [approval](pim-resource-roles-approval-workflow.md) to activate, a notification will appear in the upper right corner of the user's browser informing them the request is pending approval. If an approval isn't required, the member can start using the role.
-For more information, check out the following articles: [Activate Azure AD roles](pim-how-to-activate-role.md), [Activate my Azure resource roles](pim-resource-roles-activate-your-roles.md), and [Activate my privileged access group roles](groups-activate-roles.md)
+For more information, check out the following articles: [Activate Azure AD roles](pim-how-to-activate-role.md), [Activate my Azure resource roles](pim-resource-roles-activate-your-roles.md), and [Activate my PIM for Groups roles](groups-activate-roles.md)
### Approve or deny
Delegated approvers receive email notifications when a role request is pending their approval. Approvers can view, approve, or deny these pending requests in PIM. After the request has been approved, the member can start using the role. For example, if a user or a group was assigned the Contributor role for a resource group, they'll be able to manage that particular resource group.
-For more information, check out the following articles: [Approve or deny requests for Azure AD roles](azure-ad-pim-approval-workflow.md), [Approve or deny requests for Azure resource roles](pim-resource-roles-approval-workflow.md), and [Approve activation requests for privileged access group](groups-approval-workflow.md)
+For more information, check out the following articles: [Approve or deny requests for Azure AD roles](azure-ad-pim-approval-workflow.md), [Approve or deny requests for Azure resource roles](pim-resource-roles-approval-workflow.md), and [Approve activation requests for PIM for Groups](groups-approval-workflow.md)
### Extend and renew assignments
After administrators set up time-bound owner or member assignments, the first qu
Both user-initiated actions require an approval from a Global Administrator or Privileged Role Administrator. Admins don't need to be in the business of managing assignment expirations. You can just wait for the extension or renewal requests to arrive for simple approval or denial.
-For more information, check out the following articles: [Extend or renew Azure AD role assignments](pim-how-to-renew-extend.md), [Extend or renew Azure resource role assignments](pim-resource-roles-renew-extend.md), and [Extend or renew privileged access group assignments](groups-renew-extend.md)
+For more information, check out the following articles: [Extend or renew Azure AD role assignments](pim-how-to-renew-extend.md), [Extend or renew Azure resource role assignments](pim-resource-roles-renew-extend.md), and [Extend or renew PIM for Groups assignments](groups-renew-extend.md)
## Scenarios
Privileged Identity Management supports the following scenarios:
## Managing privileged access Azure AD groups (preview)
-In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
+In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of groups by using PIM for Groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
>[!Important]
-> To assign a privileged access group to a role for administrative access to Exchange, Security & Compliance Center, or SharePoint, use the Azure AD portal **Roles and Administrators** experience and not in the Privileged Access Groups experience to make the user or group eligible for activation into the group.
+> To assign a group managed by PIM for Groups to a role for administrative access to Exchange, Security & Compliance Center, or SharePoint, use the Azure AD portal **Roles and Administrators** experience, not the PIM for Groups experience, to make the user or group eligible for activation into the group.
### Different just-in-time policies for each group
-Some organizations use tools like Azure AD business-to-business (B2B) collaboration to invite their partners as guests to their Azure AD organization. Instead of a single just-in-time policy for all assignments to a privileged role, you can create two different privileged access groups with their own policies. You can enforce less strict requirements for your trusted employees, and stricter requirements like approval workflow for your partners when they request activation into their assigned group.
+Some organizations use tools like Azure AD business-to-business (B2B) collaboration to invite their partners as guests to their Azure AD organization. Instead of a single just-in-time policy for all assignments to a privileged role, you can create two different groups in PIM for Groups, each with its own policies. You can enforce less strict requirements for your trusted employees, and stricter requirements like approval workflow for your partners when they request activation into their assigned group.
### Activate multiple role assignments in one request
-With the privileged access groups preview, you can give workload-specific administrators quick access to multiple roles with a single just-in-time request. For example, your Tier 3 Office Admins might need just-in-time access to the Exchange Admin, Office Apps Admin, Teams Admin, and Search Admin roles to thoroughly investigate incidents daily. Before today it would require four consecutive requests, which are a process that takes some time. Instead, you can create a role assignable group called "Tier 3 Office Admins", assign it to each of the four roles previously mentioned (or any Azure AD built-in roles) and enable it for Privileged Access in the group's Activity section. Once enabled for privileged access, you can configure the just-in-time settings for members of the group and assign your admins and owners as eligible. When the admins elevate into the group, they'll become members of all four Azure AD roles.
+With the PIM for Groups preview, you can give workload-specific administrators quick access to multiple roles with a single just-in-time request. For example, your Tier 3 Office Admins might need just-in-time access to the Exchange Admin, Office Apps Admin, Teams Admin, and Search Admin roles to thoroughly investigate incidents daily. Before today, this would require four consecutive requests, which is a process that takes some time. Instead, you can create a role-assignable group called "Tier 3 Office Admins", assign it to each of the four roles previously mentioned (or any Azure AD built-in roles), and enable it for Privileged Access in the group's Activity section. Once enabled for privileged access, you can configure the just-in-time settings for members of the group and assign your admins and owners as eligible. When the admins elevate into the group, they'll become members of all four Azure AD roles.
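As a hedged sketch of the first part of that flow, a role-assignable security group can be created through Microsoft Graph (the display name and mail nickname below are example values); assigning the group to Azure AD roles and enabling it for privileged access are separate steps, and creating role-assignable groups typically requires a Privileged Role Administrator or Global Administrator:

```http
POST https://graph.microsoft.com/v1.0/groups
Content-Type: application/json

{
    "displayName": "Tier 3 Office Admins",
    "mailNickname": "tier3officeadmins",
    "mailEnabled": false,
    "securityEnabled": true,
    "isAssignableToRole": true
}
```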
## Invite guest users and assign Azure resource roles in Privileged Identity Management
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
Previously updated : 1/9/2023 Last updated : 2/3/2023
PIM enables you to allow a specific set of actions at a particular scope. Key fe
* Provide **just-in-time** privileged access to resources
-* Assign **eligibility for membership or ownership** of privileged access groups
+* Assign **eligibility for membership or ownership** of groups with PIM for Groups
* Assign **time-bound** access to resources using start and end dates
Today, you can use PIM with:
* **Azure roles** – The role-based access control (RBAC) roles in Azure that grant access to management groups, subscriptions, resource groups, and resources.
-* **Privileged Access Groups** – To set up just-in-time access to member and owner role of an Azure AD security group. Privileged Access Groups not only gives you an alternative way to set up PIM for Azure AD roles and Azure roles, but also allows you to set up PIM for other permissions across Microsoft online services like Intune, Azure Key Vaults, and Azure Information Protection.
+* **PIM for Groups** – To set up just-in-time access to the member and owner roles of an Azure AD security group. PIM for Groups not only gives you an alternative way to set up PIM for Azure AD roles and Azure roles, but also allows you to set up PIM for other permissions across Microsoft online services like Intune, Azure Key Vaults, and Azure Information Protection.
You can assign the following to these roles or groups:
-* **Users**- To get just-in-time access to Azure AD roles, Azure roles, and Privileged Access Groups.
+* **Users** - To get just-in-time access to Azure AD roles, Azure roles, and group membership or ownership with PIM for Groups.
-* **Groups**- Anyone in a group to get just-in-time access to Azure AD roles and Azure roles. For Azure AD roles, the group must be a newly created cloud group that's marked as assignable to a role while for Azure roles, the group can be any Azure AD security group. We do not recommend assigning/nesting a group to a Privileged Access Groups.
+* **Groups** - Anyone in a group to get just-in-time access to Azure AD roles and Azure roles. For Azure AD roles, the group must be a newly created cloud group that's marked as assignable to a role, while for Azure roles, the group can be any Azure AD security group. We do not recommend assigning or nesting a group in a group that's managed with PIM for Groups.
> [!NOTE]
->You cannot assign service principals as eligible to Azure AD roles, Azure roles, and Privileged Access groups but you can grant a time limited active assignment to all three.
+>You cannot assign service principals as eligible for Azure AD roles, Azure roles, and PIM for Groups, but you can grant a time-limited active assignment to all three.
### Principle of least privilege
At each stage of your deployment ensure that you are evaluating that the results
* Start with a small set of users (pilot group) and verify that the PIM behaves as expected.
-* Verify whether all the configuration you set up for the roles or privileged access groups are working correctly.
+* Verify that all the configuration you set up for the roles or PIM for Groups is working correctly.
* Roll it out to production only after it's thoroughly tested.
The following table shows an example test case:
For both Azure AD and Azure resource roles, make sure that you have users represented who will take those roles. In addition, consider the following roles when you test PIM in your staged environment:
-| Roles| Azure AD roles| Azure Resource roles| Privileged Access Groups |
+| Roles| Azure AD roles| Azure Resource roles| PIM for Groups |
| | | | |
| Member of a group| | | x |
| Members of a role| x| x| |
| IT service owner| x| | x |
| Subscription or resource owner| | x| x |
-| Privileged access group owner| | | x |
+| PIM for Groups owner| | | x |
### Plan rollback
When these important events occur in Azure resource roles, PIM sends [email noti
[Configure security alerts for the Azure resource roles](pim-resource-roles-configure-alerts.md), which will trigger an alert in case of any suspicious or unsafe activity.
-## Plan and implement PIM for privileged access groups
+## Plan and implement PIM for Groups
-Follow these tasks to prepare PIM to manage privileged access groups.
+Follow these tasks to prepare PIM to manage groups.
-### Discover privileged access groups
+### Discover PIM for Groups
It may be the case that an individual has five or six eligible assignments to Azure AD roles through PIM. They will have to activate each role individually, which can reduce productivity. Worse still, they can also have tens or hundreds of Azure resources assigned to them, which aggravates the problem.
-In this case, you should use privileged access groups. Create a privileged access group and grant it permanent active access to multiple roles. See [Privileged Identity Management (PIM) for Groups (preview)](concept-pim-for-groups.md).
+In this case, you should use PIM for Groups. Create a group, grant it permanent active access to multiple roles, and manage its membership with PIM for Groups. See [Privileged Identity Management (PIM) for Groups (preview)](concept-pim-for-groups.md).
-To manage an Azure AD role-assignable group as a privileged access group, you must [bring it under management in PIM](groups-discover-groups.md).
+To manage an Azure AD role-assignable group in PIM for Groups, you must [bring it under management in PIM](groups-discover-groups.md).
-### Configure PIM settings for privileged access groups
+### Configure PIM settings for PIM for Groups
-[Draft and configure settings](groups-role-settings.md) for the privileged access groups that you've planned to protect with PIM.
+[Draft and configure settings](groups-role-settings.md) for the groups that you've planned to protect with PIM.
The following table shows example settings:
| Owner| :heavy_check_mark:| :heavy_check_mark:| :heavy_check_mark:| Other owners of the resource| 1 Hour| None| n/a| 3 months |
| Member| :heavy_check_mark:| :heavy_check_mark:| :x:| None| 5 Hour| None| n/a| 3 months |
-### Assign eligibility for privileged access groups
+### Assign eligibility for PIM for Groups
-You can [assign eligibility to members or owners of the privileged access groups.](groups-assign-member-owner.md) With just one activation, they will have access to all the linked resources.
+You can [assign eligibility to members or owners of the group](groups-assign-member-owner.md). With just one activation, they will have access to all the linked resources.
>[!NOTE]
->You can assign the privileged group to one or more Azure AD and Azure resource roles in the same way as you assign roles to users. A maximum of 400 role-assignable groups can be created in a single Azure AD organization (tenant).
+>You can assign the group to one or more Azure AD and Azure resource roles in the same way as you assign roles to users. A maximum of 400 role-assignable groups can be created in a single Azure AD organization (tenant).
-![Assign eligibility for privileged access groups](media/pim-deployment-plan/privileged-access-groups.png)
+![Diagram of assign eligibility for PIM for Groups.](media/pim-deployment-plan/pim-for-groups.png)
-When privileged group assignment nears its expiration, use [PIM to extend or renew the group assignment](groups-renew-extend.md). You'll require an approval from the group owner.
+When a group assignment nears its expiration, use [PIM to extend or renew the group assignment](groups-renew-extend.md). You'll need approval from the group owner.
### Approve or deny PIM activation request
-Configure privileged access group members and owners to require approval for activation and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each group to reduce workload for the privileged role administrator.
+Configure PIM for Groups members and owners to require approval for activation and choose users or groups from your Azure AD organization as delegated approvers. We recommend selecting two or more approvers for each group to reduce workload for the privileged role administrator.
-[Approve or deny role activation requests for Privileged Access groups](groups-approval-workflow.md). As a delegated approver, you'll receive an email notification when a request is pending for your approval.
+[Approve or deny role activation requests for PIM for Groups](groups-approval-workflow.md). As a delegated approver, you'll receive an email notification when a request is pending for your approval.
-### View audit history for privileged access groups
+### View audit history for PIM for Groups
-[View audit history for all assignments and activations](groups-audit.md) within past 30 days for privileged access groups.
+[View audit history for all assignments and activations](groups-audit.md) within the past 30 days for PIM for Groups.
## Next steps
active-directory Pim Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-email-notifications.md
The following shows an example email that is sent when a user is assigned an Azu
![New Privileged Identity Management email for Azure resource roles](./media/pim-email-notifications/email-resources-new.png)
-## Notifications for Privileged Access groups
+## Notifications for PIM for Groups
-Privileged Identity Management sends emails to Owners only when the following events occur for Privileged Access group assignments:
+Privileged Identity Management sends emails to Owners only when the following events occur for PIM for Groups assignments:
- When an Owner or Member role assignment is pending approval
- When an Owner or Member role is assigned
Privileged Identity Management sends emails to Owners only when the following ev
- When an Owner or Member role is being renewed by an end user
- When an Owner or Member role activation request is completed
-Privileged Identity Management sends emails to end users when the following events occur for Privileged Access group role assignments:
+Privileged Identity Management sends emails to end users when the following events occur for PIM for Groups role assignments:
- When an Owner or Member role is assigned to the user
- When a user's Owner or Member role expires
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
Title: Configure Azure AD role settings in PIM - Azure AD | Microsoft Docs
+ Title: Configure Azure AD role settings in PIM - Azure Active Directory
description: Learn how to configure Azure AD role settings in Azure AD Privileged Identity Management (PIM). documentationcenter: '' editor: ''- Previously updated : 11/12/2021 Last updated : 01/27/2023 -
# Configure Azure AD role settings in Privileged Identity Management
-A privileged role administrator can customize Privileged Identity Management (PIM) in their Azure Active Directory (Azure AD) organization, including changing the experience for a user who is activating an eligible role assignment. For information on the PIM events that trigger notifications and which administrators receive them, see [Email notifications in Privileged Identity Management](pim-email-notifications.md#notifications-for-azure-ad-roles)
+In Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define role assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and more. Use the following steps to configure role settings and set up the approval workflow to specify who can approve or deny requests to elevate privilege.
+
+You need to have the Global Administrator or Privileged Role Administrator role to manage PIM role settings for Azure AD roles. Role settings are defined per role: all assignments for the same role follow the same role settings. Role settings of one role are independent from role settings of another role.
+
+PIM role settings are also known as "PIM Policies".
+ ## Open role settings
Follow these steps to open the settings for an Azure AD role.
-1. Sign in to [Azure portal](https://portal.azure.com/) with a user in the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
+1. [Sign in to Azure AD](https://aad.portal.azure.com/)
-1. Open **Azure AD Privileged Identity Management** &gt; **Azure AD roles** &gt; **Role settings**.
-
- ![Role settings page listing Azure AD roles](./media/pim-how-to-change-default-settings/role-settings.png)
+1. Select **Azure AD Privileged Identity Management -> Azure AD Roles -> Roles**. On this page, you can see the list of Azure AD roles available in the tenant, including built-in and custom roles.
+ :::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot of the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png":::
1. Select the role whose settings you want to configure.
- ![Role setting details page listing several assignment and activation settings](./media/pim-how-to-change-default-settings/role-settings-page.png)
+1. Select **Role settings**. On the Role settings page you can view current PIM role settings for the selected role.
-1. Select **Edit** to open the Role settings page.
+ :::image type="content" source="media/pim-how-to-change-default-settings/role-settings-edit.png" alt-text="Screenshot of the role settings page with options to update assignment and activation settings." lightbox="media/pim-how-to-change-default-settings/role-settings-edit.png":::
- ![Edit role settings page with options to update assignment and activation settings](./media/pim-how-to-change-default-settings/role-settings-edit.png)
+1. Select Edit to update role settings.
- On the Role setting pane for each role, there are several settings you can configure.
+1. Once finished, select Update.
-## Assignment duration
+## Role settings
-You can choose from two assignment duration options for each assignment type (eligible and active) when you configure settings for a role. These options become the default maximum duration when a user is assigned to the role in Privileged Identity Management.
+### Activation maximum duration
-You can choose one of these **eligible** assignment duration options:
+Use the **Activation maximum duration** slider to set the maximum time, in hours, that an activation request for a role assignment remains active before it expires. This value can be from one to 24 hours.
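For context, when an eligible user activates the role through Microsoft Graph instead of the portal, the requested duration has to fit within this maximum. The following is a hedged sketch with placeholder IDs and an example eight-hour duration:

```http
POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignmentScheduleRequests
Content-Type: application/json

{
    "action": "selfActivate",
    "principalId": "{userObjectId}",
    "roleDefinitionId": "{roleDefinitionId}",
    "directoryScopeId": "/",
    "justification": "Example: investigating a support incident",
    "scheduleInfo": {
        "startDateTime": "2023-02-10T08:00:00Z",
        "expiration": {
            "type": "afterDuration",
            "duration": "PT8H"
        }
    }
}
```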
-| Setting | Description |
-| | |
-| Allow permanent eligible assignment | Global admins and Privileged role admins can assign permanent eligible assignment. |
-| Expire eligible assignment after | Global admins and Privileged role admins can require that all eligible assignments have a specified start and end date. |
+### On activation, require multi-factor authentication
-And, you can choose one of these **active** assignment duration options:
+You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
-| Setting | Description |
-| | |
-| Allow permanent active assignment | Global admins and Privileged role admins can assign permanent active assignment. |
-| Expire active assignment after | Global admins and Privileged role admins can require that all active assignments have a specified start and end date. |
+A user may not be prompted for multi-factor authentication if they authenticated with a strong credential or provided multi-factor authentication earlier in this session.
-> [!NOTE]
-> All assignments that have a specified end date can be renewed by Global admins and Privileged role admins. Also, users can initiate self-service requests to [extend or renew role assignments](pim-resource-roles-renew-extend.md).
+For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
-## Require multifactor authentication
+### On activation, require Azure AD Conditional Access authentication context (Public Preview)
-Privileged Identity Management provides enforcement of Azure AD Multi-Factor Authentication on activation and on active assignment.
+You can require users who are eligible for a role to satisfy Conditional Access policy requirements: use a specific authentication method enforced through Authentication Strengths, elevate the role from an Intune-compliant device, comply with Terms of Use, and more.
-### On activation
+To enforce this requirement, you need to:
-You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multifactor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
+1. Create Conditional Access authentication context.
+1. Configure Conditional Access policy that would enforce requirements for this authentication context.
+1. Configure authentication context in PIM settings for the role.
-To require multifactor authentication to activate the role assignment, select the **On activation, require Azure MFA** option in the Activation tab of **Edit role setting**.
-### On active assignment
+To learn more about Conditional Access authentication context, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context).
-This option requires admins must complete a multifactor authentication before creating an active (as opposed to eligible) role assignment. Privileged Identity Management can't enforce multifactor authentication when the user uses their role assignment because they are already active in the role from the time that it is assigned.
+### Require justification on activation
-To require multifactor authentication when creating an active role assignment, select the **Require Azure Multi-Factor Authentication on active assignment** option in the Assignment tab of **Edit role setting**.
+You can require users to enter a business justification when they activate the eligible assignment.
-For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
+### Require ticket information on activation
-## Activation maximum duration
+You can require users to enter a support ticket number when they activate the eligible assignment. This is an information-only field, and correlation with information in any ticketing system is not enforced.
-Use the **Activation maximum duration** slider to set the maximum time, in hours, that an activation request for a role assignment remains active before it expires. This value can be from one to 24 hours.
+### Require approval to activate
+
+You can require approval for activation of an eligible assignment. The approver doesn't have to have any roles. When using this option, you have to select at least one approver (we recommend selecting at least two approvers); there are no default approvers.
+
+To learn more about approvals, see [Approve or deny requests for Azure AD roles in Privileged Identity Management](azure-ad-pim-approval-workflow.md).
+
+### Assignment duration
+
+You can choose from two assignment duration options for each assignment type (eligible and active) when you configure settings for a role. These options become the default maximum duration when a user is assigned to the role in Privileged Identity Management.
+
+You can choose one of these **eligible** assignment duration options:
-## Require justification
+| Setting | Description |
+| | |
+| Allow permanent eligible assignment | Resource administrators can assign permanent eligible assignment. |
+| Expire eligible assignment after | Resource administrators can require that all eligible assignments have a specified start and end date. |
-You can require that users enter a business justification when they activate. To require justification, check the **Require justification on active assignment** box or the **Require justification on activation** box.
+And, you can choose one of these **active** assignment duration options:
-## Require ticket information on activation
+| Setting | Description |
+| | |
+| Allow permanent active assignment | Resource administrators can assign permanent active assignment. |
+| Expire active assignment after | Resource administrators can require that all active assignments have a specified start and end date. |
-If your organization uses a ticketing system to track help desk items or change requests for your environment, you can select the **Require ticket information on activation** box to require the elevation request to contain the name of the ticketing system (optional, if your organization uses multiple systems) and the ticket number that prompted the need for role activation.
+> [!NOTE]
+> All assignments that have a specified end date can be renewed by Global admins and Privileged role admins. Also, users can initiate self-service requests to [extend or renew role assignments](pim-resource-roles-renew-extend.md).
-## Require approval to activate
+### Require multi-factor authentication on active assignment
-If setting multiple approvers, approval completes as soon as one of them approves or denies. You can't force approval from a second or subsequent approver. To require approval to activate a role, follow these steps.
+You can require that an administrator provide multi-factor authentication when they create an active (as opposed to eligible) assignment. Privileged Identity Management can't enforce multi-factor authentication when the user uses their role assignment because they are already active in the role from the time that it is assigned.
-1. Check the **Require approval to activate** check box.
+An administrator may not be prompted for multi-factor authentication if they authenticated with a strong credential or provided multi-factor authentication earlier in this session.
-1. Select **Select approvers**.
+### Require justification on active assignment
- ![Select a user or group pane to select approvers](./media/pim-resource-roles-configure-role-settings/resources-role-settings-select-approvers.png)
+You can require that users enter a business justification when they create an active (as opposed to eligible) assignment.
-1. Select at least one user and then click **Select**. Select at least one approver. If no specific approvers are selected, Privileged Role Administrators and Global Administrators become the default approvers.
- > [!Note]
- > An approver does not have to have an Azure AD administrative role themselves. They can be a regular user, such as an IT executive.
+In the **Notifications** tab on the role settings page, Privileged Identity Management enables granular control over who receives notifications and which notifications they receive.
-1. Select **Update** to save your changes.
+- **Turning off an email**</br>
+You can turn off specific emails by clearing the default recipient check box and deleting any other recipients.
+- **Limit emails to specified email addresses**</br>
+You can turn off emails sent to default recipients by clearing the default recipient check box. You can then add other email addresses as recipients. If you want to add more than one email address, separate them using a semicolon (;).
+- **Send emails to both default recipients and more recipients**</br>
+You can send emails to both default recipient and another recipient by selecting the default recipient checkbox and adding email addresses for other recipients.
+- **Critical emails only**</br>
+For each type of email, you can select the check box to receive critical emails only. What this means is that Privileged Identity Management will continue to send emails to the specified recipients only when the email requires an immediate action. For example, emails asking users to extend their role assignment will not be triggered while emails requiring admins to approve an extension request will be triggered.
## Manage role settings through Microsoft Graph
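Role settings for Azure AD roles are backed by the role management policy APIs in Microsoft Graph. As a hedged sketch (the role definition ID is a placeholder), the policy assignment for a role at the tenant scope can be retrieved as follows; the `policyId` in the response identifies the policy whose rules hold the settings described in this article:

```http
GET https://graph.microsoft.com/v1.0/policies/roleManagementPolicyAssignments?$filter=scopeId eq '/' and scopeType eq 'DirectoryRole' and roleDefinitionId eq '{roleDefinitionId}'
```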
active-directory Pim Resource Roles Configure Role Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md
Title: Configure Azure resource role settings in PIM - Azure AD | Microsoft Docs
+ Title: Configure Azure resource role settings in PIM - Azure Active Directory
description: Learn how to configure Azure resource role settings in Azure AD Privileged Identity Management (PIM). documentationcenter: ''
na Previously updated : 06/24/2022 Last updated : 01/27/2023 -
# Configure Azure resource role settings in Privileged Identity Management
-When you configure Azure resource role settings, you define the default settings that are applied to Azure role assignments in Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra. Use the following procedures to configure the approval workflow and specify who can approve or deny requests.
+In Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, role settings define role assignment properties: MFA and approval requirements for activation, assignment maximum duration, notification settings, and more. Use the following steps to configure role settings and set up the approval workflow to specify who can approve or deny requests to elevate privilege.
-## Open role settings
-
-Follow these steps to open the settings for an Azure resource role.
-
-1. Sign in to [Azure portal](https://portal.azure.com/) with a user in the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure resources**.
- >[!NOTE]
- > Approver doesn't have to have any Azure or Azure AD role assigned.
+You need to have the Owner or User Access Administrator role to manage PIM role settings for the resource. Role settings are defined per role and per resource: all assignments for the same role follow the same role settings. Role settings of one role are independent from role settings of another role. Role settings of one resource are independent from role settings of another resource, and role settings configured on a higher level, such as a subscription, are not inherited on a lower level, such as a resource group.
-1. Select the resource you want to manage, such as a subscription or management group.
+PIM role settings are also known as "PIM Policies".
- ![Azure resources page listing resources that can be managed](./media/pim-resource-roles-configure-role-settings/resources-list.png)
+## Open role settings
-1. Select **Settings**.
+Follow these steps to open the settings for an Azure resource role.
- ![Role settings page listing Azure resource roles](./media/pim-resource-roles-configure-role-settings/resources-role-settings.png)
+1. [Sign in to Azure AD](https://aad.portal.azure.com/)
-1. Select the role whose settings you want to configure.
+1. Select **Azure AD Privileged Identity Management -> Azure Resources**. On this page, you can see the list of Azure resources discovered in PIM. Use the Resource type filter to select all required resource types.
- ![Role setting details page listing several assignment and activation settings](./media/pim-resource-roles-configure-role-settings/resources-role-setting-details.png)
+ :::image type="content" source="media/pim-resource-roles-configure-role-settings/resources-list.png" alt-text="Screenshot of the list of Azure resources discovered in PIM." lightbox="media/pim-resource-roles-configure-role-settings/resources-list.png":::
-1. Select **Edit** to open the **Edit role setting** pane. The first tab allows you to update the configuration for role activation in Privileged Identity Management.
+1. Select the resource that you need to configure PIM role settings for.
- ![Edit role settings page with Activation tab open](./media/pim-resource-roles-configure-role-settings/role-settings-activation-tab.png)
+1. Select **Settings**. View list of PIM policies for a selected resource.
-1. Select the **Assignment** tab or the **Next: Assignment** button at the bottom of the page to open the assignment setting tab. These settings control role assignments made inside the Privileged Identity Management interface.
+ :::image type="content" source="media/pim-resource-roles-configure-role-settings/resources-role-settings.png" alt-text="Screenshot of the list of PIM policies for a selected resource." lightbox="media/pim-resource-roles-configure-role-settings/resources-role-settings.png":::
- ![Role Assignment tab in role settings page](./media/pim-resource-roles-configure-role-settings/role-settings-assignment-tab.png)
+1. Select the role or policy that you want to configure.
-1. Use the **Notification** tab or the **Next: Activation** button at the bottom of the page to get to the notification setting tab for this role. These settings control all the email notifications related to this role.
+1. Select Edit to update role settings.
- ![Role Notifications tab in role settings page](./media/pim-resource-roles-configure-role-settings/role-settings-notification-tab.png)
+1. Once finished, select Update.
- In the **Notifications** tab on the role settings page, Privileged Identity Management enables granular control over who receives notifications and which notifications they receive.
+## Role settings
- - **Turning off an email**<br>You can turn off specific emails by clearing the default recipient check box and deleting any additional recipients.
+### Activation maximum duration
- - **Limit emails to specified email addresses**<br>You can turn off emails sent to default recipients by clearing the default recipient checkbox. You can then add additional email addresses as additional recipients. If you want to add more than one email address, separate them using a semicolon (;).
+Use the **Activation maximum duration** slider to set the maximum time, in hours, that an activation request for a role assignment remains active before it expires. This value can be from one to 24 hours.
- - **Send emails to both default recipients and additional recipients**<br>You can send emails to both default recipient and additional recipient by selecting the default recipient checkbox and adding email addresses for additional recipients.
+### On activation, require multi-factor authentication
- - **Critical emails only**<br>For each type of email, you can select the checkbox to receive critical emails only. What this means is that Privileged Identity Management will continue to send emails to the configured recipients only when the email requires an immediate action. For example, emails asking users to extend their role assignment will not be triggered while an emails requiring admins to approve an extension request will be triggered.
+You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multi-factor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
-1. Select the **Update** button at any time to update the role settings.
+A user may not be prompted for multi-factor authentication if they authenticated with a strong credential or provided multi-factor authentication earlier in this session.
-## Assignment duration
+For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
-You can choose from two assignment duration options for each assignment type (eligible and active) when you configure settings for a role. These options become the default maximum duration when a user is assigned to the role in Privileged Identity Management.
+### On activation, require Azure AD Conditional Access authentication context (Public Preview)
-You can choose one of these **eligible** assignment duration options:
+You can require users who are eligible for a role to satisfy Conditional Access policy requirements: use a specific authentication method enforced through Authentication Strengths, elevate the role from an Intune-compliant device, comply with Terms of Use, and more.
-| | Description |
-| | |
-| **Allow permanent eligible assignment** | Resource administrators can assign permanent eligible assignment. |
-| **Expire eligible assignment after** | Resource administrators can require that all eligible assignments have a specified start and end date. |
+To enforce this requirement, you need to:
-And, you can choose one of these **active** assignment duration options:
+1. Create Conditional Access authentication context.
+1. Configure Conditional Access policy that would enforce requirements for this authentication context.
+1. Configure authentication context in PIM settings for the role.
-| | Description |
-| | |
-| **Allow permanent active assignment** | Resource administrators can assign permanent active assignment. |
-| **Expire active assignment after** | Resource administrators can require that all active assignments have a specified start and end date. |
-> [!NOTE]
-> All assignments that have a specified end date can be renewed by resource administrators. Also, users can initiate self-service requests to [extend or renew role assignments](pim-resource-roles-renew-extend.md).
+To learn more about Conditional Access authentication context, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context).
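The first step, creating an authentication context, can also be scripted. The following is a minimal sketch, not an official sample, that lists the authentication context class references (`c1` through `c25`) already defined in a tenant with Microsoft Graph so you can decide which one to attach to the role setting. The token value, the Graph version, and the permission (`Policy.Read.All`) are assumptions to verify against the current Graph reference.

```python
# Minimal sketch: list Conditional Access authentication context class
# references so you can pick one to attach to a PIM role setting.
# Assumptions: the bearer token is a placeholder obtained elsewhere (for
# example with MSAL), and the endpoint is the v1.0
# authenticationContextClassReference collection - verify the API version
# available in your tenant.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder


def list_authentication_contexts() -> list[dict]:
    """Return the tenant's authentication context class references."""
    resp = requests.get(
        f"{GRAPH}/identity/conditionalAccess/authenticationContextClassReferences",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])


if __name__ == "__main__":
    for ctx in list_authentication_contexts():
        print(ctx["id"], ctx.get("displayName"), "available:", ctx.get("isAvailable"))
```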
-## Require multifactor authentication
+### Require justification on activation
-Privileged Identity Management provides optional enforcement of Azure AD Multi-Factor Authentication for two distinct scenarios.
+You can require users to enter a business justification when they activate the eligible assignment.
-### On active assignment
+### Require ticket information on activation
-This option requires admins must complete a multifactor authentication before creating an active (as opposed to eligible) role assignment. Privileged Identity Management can't enforce multifactor authentication when the user activates their role assignment because the user is already active in the role from the time that it is assigned.
+You can require users to enter a support ticket number when they activate the eligible assignment. This is an information-only field; correlation with any ticketing system is not enforced.
-To require multifactor authentication when creating an active role assignment, you can enforce multifactor authentication on active assignment by checking the **Require Multi-Factor Authentication on active assignment** box.
+### Require approval to activate
-### On activation
+You can require approval for activation of an eligible assignment. The approver doesn't need to hold any roles. When you use this option, you must select at least one approver (we recommend selecting at least two); there are no default approvers.
-You can require users who are eligible for a role to prove who they are using Azure AD Multi-Factor Authentication before they can activate. Multifactor authentication ensures that the user is who they say they are with reasonable certainty. Enforcing this option protects critical resources in situations when the user account might have been compromised.
+To learn more about approvals, see [Approve or deny requests for Azure AD roles in Privileged Identity Management](azure-ad-pim-approval-workflow.md).
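When justification, ticket information, and approval are all enabled, an eligible user's activation request must carry that data. The sketch below is an illustration rather than an official sample: it shows what such a self-activation request could look like against the Microsoft Graph `roleAssignmentScheduleRequests` API. The principal ID, role definition ID, duration, and ticket values are placeholders, and the payload shape should be confirmed against the current Graph reference.

```python
# Minimal sketch: submit a PIM self-activation request with a justification
# and ticket information via Microsoft Graph.
# Assumptions: token is a placeholder with a permission such as
# RoleAssignmentSchedule.ReadWrite.Directory; IDs, dates, and ticket values
# are placeholders; field names follow the roleAssignmentScheduleRequest
# resource - confirm them against the current Graph reference.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder


def self_activate_role(principal_id: str, role_definition_id: str) -> dict:
    """Submit a PIM self-activation request with justification and ticket info."""
    body = {
        "action": "selfActivate",
        "principalId": principal_id,
        "roleDefinitionId": role_definition_id,
        "directoryScopeId": "/",  # tenant-wide scope
        "justification": "Investigating incident 4711",
        "ticketInfo": {"ticketNumber": "4711", "ticketSystem": "ServiceNow"},
        "scheduleInfo": {
            "startDateTime": "2023-02-11T09:00:00Z",
            "expiration": {"type": "afterDuration", "duration": "PT8H"},
        },
    }
    resp = requests.post(
        f"{GRAPH}/roleManagement/directory/roleAssignmentScheduleRequests",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

If approval is required, a request like this stays pending until an approver acts on it; the requester doesn't receive the role immediately.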
-To require multifactor authentication before activation, check the **Require Multi-Factor Authentication on activation** box.
+### Assignment duration
-For more information, see [Multifactor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
+You can choose from two assignment duration options for each assignment type (eligible and active) when you configure settings for a role. These options become the default maximum duration when a user is assigned to the role in Privileged Identity Management.
-## Activation maximum duration
+You can choose one of these **eligible** assignment duration options:
-Use the **Activation maximum duration** slider to set the maximum time, in hours, that an activation request for a role assignment remains active before it expires. This value can be from one to 24 hours.
+| Setting | Description |
+| | |
+| Allow permanent eligible assignment | Resource administrators can assign permanent eligible assignment. |
+| Expire eligible assignment after | Resource administrators can require that all eligible assignments have a specified start and end date. |
-## Require justification
+And, you can choose one of these **active** assignment duration options:
-You can require that users enter a business justification when they activate. To require justification, check the **Require justification on active assignment** box or the **Require justification on activation** box.
+| Setting | Description |
+| | |
+| Allow permanent active assignment | Resource administrators can assign permanent active assignment. |
+| Expire active assignment after | Resource administrators can require that all active assignments have a specified start and end date. |
-## Require approval to activate
+> [!NOTE]
+> All assignments that have a specified end date can be renewed by Global admins and Privileged role admins. Also, users can initiate self-service requests to [extend or renew role assignments](pim-resource-roles-renew-extend.md).
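These duration settings are stored as expiration rules on the role management policy, so they can be audited programmatically. The following is a minimal sketch, assuming a placeholder token with a role management policy read permission (such as `RoleManagementPolicy.Read.Directory`); the filter values and rule property names follow the Graph `unifiedRoleManagementPolicy` resource and should be verified against the current reference.

```python
# Minimal sketch: read the expiration rules behind the eligible/active
# assignment duration settings for Azure AD roles via Microsoft Graph.
# Assumptions: token is a placeholder; rule IDs such as
# "Expiration_Admin_Eligibility" follow the unifiedRoleManagementPolicy
# resource - verify them against the current Graph reference.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder


def directory_role_policies() -> list[dict]:
    """List role management policies for Azure AD roles, including their rules."""
    params = {
        "$filter": "scopeId eq '/' and scopeType eq 'DirectoryRole'",
        "$expand": "rules",
    }
    resp = requests.get(
        f"{GRAPH}/policies/roleManagementPolicies",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params=params,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])


if __name__ == "__main__":
    for policy in directory_role_policies():
        expirations = [r for r in policy.get("rules", []) if r["id"].startswith("Expiration_")]
        print(policy["id"])
        for rule in expirations:
            print("  ", rule["id"], rule.get("maximumDuration"), rule.get("isExpirationRequired"))
```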
-If you want to require approval to activate a role, follow these steps.
+### Require multi-factor authentication on active assignment
-1. Check the **Require approval to activate** check box.
+You can require that administrators provide multi-factor authentication when they create an active (as opposed to eligible) assignment. Privileged Identity Management can't enforce multi-factor authentication when the user uses their role assignment, because they are already active in the role from the time it is assigned.
-1. Select **Select approvers** to open the **Select a member or group** page.
+Administrators may not be prompted for multi-factor authentication if they authenticated with strong credentials or provided multi-factor authentication earlier in the session.
- ![Select a user or group pane to select approvers](./media/pim-resource-roles-configure-role-settings/resources-role-settings-select-approvers.png)
+### Require justification on active assignment
-1. Select at least one user or group and then click **Select**. You can add any combination of users and groups. You must select at least one approver. There are no default approvers.
+You can require that users enter a business justification when they create an active (as opposed to eligible) assignment.
- Your selections will appear in the list of selected approvers.
+In the **Notifications** tab on the role settings page, Privileged Identity Management enables granular control over who receives notifications and which notifications they receive.
-1. Once you have specified your all your role settings, select **Update** to save your changes.
+- **Turning off an email**</br>
+You can turn off specific emails by clearing the default recipient check box and deleting any other recipients.
+- **Limit emails to specified email addresses**</br>
+You can turn off emails sent to default recipients by clearing the default recipient check box. You can then add other email addresses as recipients. If you want to add more than one email address, separate them using a semicolon (;).
+- **Send emails to both default recipients and more recipients**</br>
+You can send emails to both the default recipients and additional recipients by selecting the default recipient check box and adding email addresses for the other recipients.
+- **Critical emails only**</br>
+For each type of email, you can select the check box to receive critical emails only. This means that Privileged Identity Management continues to send emails to the specified recipients only when the email requires immediate action. For example, emails asking users to extend their role assignment aren't triggered, while emails requiring admins to approve an extension request are triggered.
## Next steps
active-directory Subscription Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/subscription-requirements.md
You will need an Azure AD license to use PIM and all of its settings. Currently
Ensure that your directory has Azure AD Premium P2 licenses for the following categories of users: - Users with eligible and/or time-bound assignments to Azure AD or Azure roles managed using PIM-- Users with eligible and/or time-bound assignments as members or owners of privileged access groups
+- Users with eligible and/or time-bound assignments as members or owners of PIM for Groups
- Users able to approve or reject activation requests in PIM - Users assigned to an access review - Users who perform access reviews
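As a quick check, you can list the tenant's subscribed SKUs with Microsoft Graph and look for the Azure AD Premium P2 service plan. This is a minimal sketch; the token is a placeholder, and the service plan name `AAD_PREMIUM_P2` is the commonly documented value, so verify it against your own SKU data.

```python
# Minimal sketch: check whether any subscribed SKU includes the Azure AD
# Premium P2 service plan required by the PIM scenarios listed above.
# Assumptions: token is a placeholder with a permission such as
# Organization.Read.All; "AAD_PREMIUM_P2" is the commonly documented
# service plan name.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder


def has_premium_p2() -> bool:
    resp = requests.get(
        f"{GRAPH}/subscribedSkus",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for sku in resp.json().get("value", []):
        for plan in sku.get("servicePlans", []):
            if plan.get("servicePlanName") == "AAD_PREMIUM_P2":
                return True
    return False


if __name__ == "__main__":
    print("Azure AD Premium P2 available:", has_premium_p2())
```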
active-directory Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/best-practices.md
If you have an external governance system that takes advantage of groups, then y
You can assign an owner to role-assignable groups. That owner decides who is added to or removed from the group, so indirectly, decides who gets the role assignment. In this way, a Global Administrator or Privileged Role Administrator can delegate role management on a per-role basis by using groups. For more information, see [Use Azure AD groups to manage role assignments](groups-concept.md).
-## 7. Activate multiple roles at once using privileged access groups
+## 7. Activate multiple roles at once using PIM for Groups
It may be the case that an individual has five or six eligible assignments to Azure AD roles through PIM. They will have to activate each role individually, which can reduce productivity. Worse still, they can also have tens or hundreds of Azure resources assigned to them, which aggravates the problem.
-In this case, you should use [Privileged Identity Management (PIM) for Groups (preview)](../privileged-identity-management/concept-pim-for-groups.md). Create a privileged access group and grant it permanent access to multiple roles (Azure AD and/or Azure). Make that user an eligible member or owner of this group. With just one activation, they will have access to all the linked resources.
+In this case, you should use [Privileged Identity Management (PIM) for Groups (preview)](../privileged-identity-management/concept-pim-for-groups.md). Create a group managed by PIM for Groups and grant it permanent access to multiple roles (Azure AD and/or Azure). Make that user an eligible member or owner of this group. With just one activation, they will have access to all the linked resources.
-![Privileged access group diagram showing activating multiple roles at once](./media/best-practices/privileged-access-group.png)
+![PIM for Groups diagram showing activating multiple roles at once](./media/best-practices/pim-for-groups.png)
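A group used this way must be role-assignable. The sketch below is offered as an illustration rather than a prescribed procedure: it creates such a security group through Microsoft Graph, after which you would assign the Azure AD and Azure roles to the group and manage eligible membership through PIM for Groups. The group name and permissions are assumptions.

```python
# Minimal sketch: create a role-assignable security group with Microsoft
# Graph, suitable for use with PIM for Groups.
# Assumptions: token is a placeholder with permissions such as
# Group.ReadWrite.All and RoleManagement.ReadWrite.Directory; the name is
# a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder


def create_role_assignable_group(name: str) -> dict:
    body = {
        "displayName": name,
        "mailNickname": name.replace(" ", "").lower(),
        "mailEnabled": False,
        "securityEnabled": True,
        "isAssignableToRole": True,  # required before Azure AD roles can be assigned to the group
    }
    resp = requests.post(
        f"{GRAPH}/groups",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```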
## 8. Use cloud native accounts for Azure AD roles
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-concept.md
Role-assignable groups are designed to help prevent potential breaches by having
If you do not want members of the group to have standing access to a role, you can use [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) to make a group eligible for a role assignment. Each member of the group is then eligible to activate the role assignment for a fixed time duration.
-> [!NOTE]
-> For privileged access groups that are used to elevate into Azure AD roles, we recommend that you require an approval process for eligible member assignments. Assignments that can be activated without approval might create a security risk from administrators who have a lower level of permissions. For example, the Helpdesk Administrator has permissions to reset an eligible user's password.
## Scenarios not supported
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users in this role can create and manage content, like topics, acronyms and lear
## License Administrator
-Users in this role can add, remove, and update license assignments on users, groups (using group-based licensing), and manage the usage location on users. The role does not grant the ability to purchase or manage subscriptions, create or manage groups, or create or manage users beyond the usage location. This role has no access to view, create, or manage support tickets.
+Users in this role can read, add, remove, and update license assignments on users, groups (using group-based licensing), and manage the usage location on users. The role does not grant the ability to purchase or manage subscriptions, create or manage groups, or create or manage users beyond the usage location. This role has no access to view, create, or manage support tickets.
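For example, the license operations this role permits map to Microsoft Graph calls such as `assignLicense`. The following minimal sketch sets a user's usage location and then assigns a license SKU; the user principal name, SKU ID, and usage location are placeholders.

```python
# Minimal sketch: set a usage location and assign a license SKU to a user
# via Microsoft Graph (the operations a License Administrator can perform).
# Assumptions: token is a placeholder with an appropriate permission such as
# User.ReadWrite.All; the UPN and SKU ID passed in are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def assign_license(user_upn: str, sku_id: str) -> dict:
    # A usage location must be set before a license can be assigned.
    requests.patch(
        f"{GRAPH}/users/{user_upn}",
        headers=HEADERS,
        json={"usageLocation": "US"},
        timeout=30,
    ).raise_for_status()
    resp = requests.post(
        f"{GRAPH}/users/{user_upn}/assignLicense",
        headers=HEADERS,
        json={"addLicenses": [{"skuId": sku_id, "disabledPlans": []}], "removeLicenses": []},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```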
> [!div class="mx-tableFixed"] > | Actions | Description |
Assign the User Administrator role to users who need to do the following:
| Delete or restore some users | [Who can perform sensitive actions](#who-can-perform-sensitive-actions) | | Create and manage user views | | | Create and manage all groups | |
-| Assign licenses for all users, including all administrators | |
+| Assign and read licenses for all users, including all administrators | |
| Reset passwords | [Who can reset passwords](#who-can-reset-passwords) | | Invalidate refresh tokens | [Who can reset passwords](#who-can-reset-passwords) | | Update (FIDO) device keys | |
active-directory Beable Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beable-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Beable
+description: Learn how to configure single sign-on between Azure Active Directory and Beable.
++++++++ Last updated : 02/09/2023++++
+# Azure Active Directory SSO integration with Beable
+
+In this article, you learn how to integrate Beable with Azure Active Directory (Azure AD). Beable Education offers interactive and engaging online learning platforms, textbooks, and mobile apps that help students access information and succeed in their studies. When you integrate Beable with Azure AD, you can:
+
+* Control in Azure AD who has access to Beable.
+* Enable your users to be automatically signed-in to Beable with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Beable in a test environment. Beable supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Beable, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Beable single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Beable application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Beable from the Azure AD gallery
+
+Add Beable from the Azure AD application gallery to configure single sign-on with Beable. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Beable** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.beable.com`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://prod-literacy-backend-alb-12049610218161332941.beable.com/login/ssoVerification/?providerId=1466658d-11ae-11ed-b1a0-b9e58c7ef6cc&identifier=<DOMAIN>`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Beable support team](https://beable.com/contact/) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. Beable application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Beable application expects a few more attributes to be passed back in the SAML response; they're shown below. These attributes are also prepopulated, but you can review them as per your requirements. A local verification sketch follows this procedure.
+
+ | Name | Source Attribute|
+ | | |
+ | usertype | user.usertype |
+ | preferredlanguage | user.preferredlanguage |
+ | assignedroles | user.assignedroles |
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Beable** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
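To verify locally that the SAML response Azure AD issues actually contains the custom attributes listed above (usertype, preferredlanguage, assignedroles), you can decode a captured `SAMLResponse` value and inspect it. This is a standard-library sketch for troubleshooting on your side; it isn't part of the Azure AD or Beable configuration, and the captured value shown is a placeholder.

```python
# Minimal sketch: decode a captured base64 SAMLResponse and list the
# attribute names and values it carries, so you can confirm the custom
# claims are present. The placeholder string must be replaced with a real
# value captured from a browser trace or SAML debugging tool.
import base64
import xml.etree.ElementTree as ET

ASSERTION_NS = "urn:oasis:names:tc:SAML:2.0:assertion"


def saml_attributes(saml_response_b64: str) -> dict[str, list[str]]:
    """Return a mapping of attribute name -> values from a SAML response."""
    root = ET.fromstring(base64.b64decode(saml_response_b64))
    attrs: dict[str, list[str]] = {}
    for attribute in root.iter(f"{{{ASSERTION_NS}}}Attribute"):
        values = [v.text or "" for v in attribute.findall(f"{{{ASSERTION_NS}}}AttributeValue")]
        attrs[attribute.get("Name", "")] = values
    return attrs


if __name__ == "__main__":
    captured = "<base64 SAMLResponse from a browser trace>"  # placeholder
    parsed = saml_attributes(captured)
    for name in ("usertype", "preferredlanguage", "assignedroles"):
        print(name, "->", parsed.get(name, "missing"))
```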
+## Configure Beable SSO
+
+To configure single sign-on on the **Beable** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Beable support team](https://beable.com/contact/). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Beable test user
+
+In this section, the users are rostered in Beable. Work with [Beable support team](https://beable.com/contact/) to provision the users in the Beable platform.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the Beable for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Beable tile in the My Apps, you should be automatically signed in to the Beable for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Beable you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Canva Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canva-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Canva
+description: Learn how to configure single sign-on between Azure Active Directory and Canva.
++++++++ Last updated : 02/09/2023++++
+# Azure Active Directory SSO integration with Canva
+
+In this article, you'll learn how to integrate Canva with Azure Active Directory (Azure AD). Canva is your photo editor, video editor, and graphic design tool all in one app. Create stunning social media posts, videos, cards, flyers, photo collages & more. When you integrate Canva with Azure AD, you can:
+
+* Control in Azure AD who has access to Canva.
+* Enable your users to be automatically signed-in to Canva with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Canva in a test environment. Canva supports **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Canva, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Canva single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Canva application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Canva from the Azure AD gallery
+
+Add Canva from the Azure AD application gallery to configure single sign-on with Canva. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Canva** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Your Canva application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example. The default value of **Unique User Identifier** is **user.userprincipalname**, but Canva expects it to be mapped to the user's object ID. For that, you can use the **user.objectid** attribute from the list (or the appropriate attribute based on your organization's configuration) and select **Persistent** as the name identifier format from the dropdown. A local check of the NameID is sketched after this procedure.
+
+ ![Screenshot shows the image of custom attribute mappings.](common/default-attributes.png "Image")
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Canva** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
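Because Canva expects the NameID to be the user's object ID in the **Persistent** format, it can help to confirm both from a captured SAML response before involving the support team. The sketch below uses only the Python standard library; the base64 `SAMLResponse` value is a placeholder taken from a browser trace or SAML debugging tool.

```python
# Minimal sketch: extract the NameID value and format from a captured
# base64 SAMLResponse and check that the format is persistent.
import base64
import xml.etree.ElementTree as ET

ASSERTION_NS = "urn:oasis:names:tc:SAML:2.0:assertion"
PERSISTENT = "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"


def name_id(saml_response_b64: str) -> tuple[str, str]:
    """Return (NameID value, NameID format) from a SAML response."""
    root = ET.fromstring(base64.b64decode(saml_response_b64))
    element = root.find(f".//{{{ASSERTION_NS}}}NameID")
    if element is None:
        raise ValueError("No NameID found in the SAML assertion")
    return element.text or "", element.get("Format", "")


if __name__ == "__main__":
    value, fmt = name_id("<base64 SAMLResponse>")  # placeholder
    print("NameID:", value)
    print("Persistent format:", fmt == PERSISTENT)
```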
+## Configure Canva SSO
+
+To configure single sign-on on **Canva** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Canva support team](mailto:support@canva.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Canva test user
+
+In this section, a user called B.Simon is created in Canva. Canva supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Canva, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the Canva for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the Canva tile in the My Apps, you should be automatically signed in to the Canva for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Canva you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Dojonavi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dojonavi-tutorial.md
+
+ Title: Azure Active Directory SSO integration with DojoNavi
+description: Learn how to configure single sign-on between Azure Active Directory and DojoNavi.
++++++++ Last updated : 02/09/2023++++
+# Azure Active Directory SSO integration with DojoNavi
+
+In this article, you'll learn how to integrate DojoNavi with Azure Active Directory (Azure AD). DojoNavi is a next-generation manual solution that improves system operation efficiency and reduces operation costs by providing navigation and blocking functions for system operations. When you integrate DojoNavi with Azure AD, you can:
+
+* Control in Azure AD who has access to DojoNavi.
+* Enable your users to be automatically signed-in to DojoNavi with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for DojoNavi in a test environment. DojoNavi supports **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with DojoNavi, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* DojoNavi single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the DojoNavi application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add DojoNavi from the Azure AD gallery
+
+Add DojoNavi from the Azure AD application gallery to configure single sign-on with DojoNavi. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **DojoNavi** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://<SUBDOMAIN>.dojo-navi.com/external_sso_service/metadata/` |
+ | `https://<SUBDOMAIN>.dojo-sero.tepss.com/external_sso_service/metadata/` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ |-|
+ | `https://<SUBDOMAIN>.dojo-navi.com/external_sso_service/acs/` |
+ | `https://<SUBDOMAIN>.dojo-sero.tepss.com/external_sso_service/acs/` |
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |--|
+ | `https://<SUBDOMAIN>.dojo-navi.com/external_sso_service/sso/` |
+ | `https://<SUBDOMAIN>.dojo-sero.tepss.com/external_sso_service/sso/` |
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [DojoNavi Client support team](mailto:product_support@tenda.co.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up DojoNavi** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
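Because DojoNavi accepts two domain variants for each URL, a quick local pattern check can catch typos before you save the Basic SAML Configuration. The sketch below is an optional helper using only the Python standard library; the subdomain and URLs are placeholders, and the patterns simply mirror the tables in the procedure above.

```python
# Minimal sketch: check that the Identifier, Reply URL, and Sign on URL you
# plan to enter match the documented DojoNavi patterns. Values below are
# placeholders, not real tenant URLs.
import re

PATTERNS = {
    "identifier": r"^https://[\w-]+\.(dojo-navi\.com|dojo-sero\.tepss\.com)/external_sso_service/metadata/$",
    "reply_url": r"^https://[\w-]+\.(dojo-navi\.com|dojo-sero\.tepss\.com)/external_sso_service/acs/$",
    "sign_on_url": r"^https://[\w-]+\.(dojo-navi\.com|dojo-sero\.tepss\.com)/external_sso_service/sso/$",
}


def validate(values: dict[str, str]) -> dict[str, bool]:
    """Return True/False per setting, depending on whether it matches its pattern."""
    return {key: bool(re.match(PATTERNS[key], values.get(key, ""))) for key in PATTERNS}


if __name__ == "__main__":
    print(validate({
        "identifier": "https://contoso.dojo-navi.com/external_sso_service/metadata/",
        "reply_url": "https://contoso.dojo-navi.com/external_sso_service/acs/",
        "sign_on_url": "https://contoso.dojo-navi.com/external_sso_service/sso/",
    }))
```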
+## Configure DojoNavi SSO
+
+To configure single sign-on on **DojoNavi** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [DojoNavi support team](mailto:product_support@tenda.co.jp). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create DojoNavi test user
+
+In this section, you create a user called Britta Simon at DojoNavi. Work with [DojoNavi support team](mailto:product_support@tenda.co.jp) to add the users in the DojoNavi platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to DojoNavi Sign-on URL where you can initiate the login flow.
+
+* Go to DojoNavi Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the DojoNavi for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the DojoNavi tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the DojoNavi for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure DojoNavi you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory It Conductor Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/it-conductor-tutorial.md
+
+ Title: Azure Active Directory SSO integration with IT-Conductor
+description: Learn how to configure single sign-on between Azure Active Directory and IT-Conductor.
++++++++ Last updated : 02/06/2023++++
+# Azure Active Directory SSO integration with IT-Conductor
+
+In this article, you'll learn how to integrate IT-Conductor with Azure Active Directory (Azure AD). IT-Conductor is a Software-as-a-Service automation platform for remote agentless monitoring, performance management and IT operations. When you integrate IT-Conductor with Azure AD, you can:
+
+* Control in Azure AD who has access to IT-Conductor.
+* Enable your users to be automatically signed-in to IT-Conductor with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for IT-Conductor in a test environment. IT-Conductor supports **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with IT-Conductor, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* IT-Conductor single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the IT-Conductor application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add IT-Conductor from the Azure AD gallery
+
+Add IT-Conductor from the Azure AD application gallery to configure single sign-on with IT-Conductor. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **IT-Conductor** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. IT-Conductor application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the IT-Conductor application expects a few more attributes to be passed back in the SAML response; they're shown below. These attributes are also prepopulated, but you can review them as per your requirements. A federation metadata check is sketched after this procedure.
+
+ | Name | Source Attribute|
+ | | |
+ | PERSON_Email | user.mail |
+ | OBJECT_Name | user.userprincipalname |
+ | PERSON_FirstName | user.givenname |
+ | PERSON_LastName | user.surname |
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up IT-Conductor** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
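Besides downloading the Federation Metadata XML from the portal, you can fetch the same document from the tenant's federation metadata endpoint and confirm which token-signing certificates it advertises before sending it to the support team. This is a minimal sketch; the tenant ID and application ID are placeholders, and the app-specific `appid` query parameter is the commonly documented form, so verify it for your tenant.

```python
# Minimal sketch: download the Azure AD federation metadata for an
# enterprise application and print the token-signing certificates it lists.
# The tenant ID and application ID are placeholders; the appid query
# parameter is an assumption to verify for your tenant.
import requests
import xml.etree.ElementTree as ET

DS_NS = "http://www.w3.org/2000/09/xmldsig#"


def signing_certificates(tenant_id: str, app_id: str) -> list[str]:
    url = (
        f"https://login.microsoftonline.com/{tenant_id}"
        f"/federationmetadata/2007-06/federationmetadata.xml?appid={app_id}"
    )
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    return [el.text.strip() for el in root.iter(f"{{{DS_NS}}}X509Certificate") if el.text]


if __name__ == "__main__":
    certs = signing_certificates("<tenant-id>", "<application-id>")
    print(f"Found {len(certs)} certificate entries")
```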
+## Configure IT-Conductor SSO
+
+To configure single sign-on on the **IT-Conductor** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [IT-Conductor support team](mailto:support@itconductor.com). They configure this setting to have the SAML SSO connection set properly on both sides. For more information, see [this link](https://docs.itconductor.com/wiki/start-here/sso-setup).
+
+### Create IT-Conductor test user
+
+In this section, a user called B.Simon is created in IT-Conductor. IT-Conductor supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in IT-Conductor, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on Test this application in Azure portal and you should be automatically signed in to the IT-Conductor for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the IT-Conductor tile in the My Apps, you should be automatically signed in to the IT-Conductor for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure IT-Conductor you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Kno2fy Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kno2fy-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Kno2fy
+description: Learn how to configure single sign-on between Azure Active Directory and Kno2fy.
++++++++ Last updated : 02/06/2023++++
+# Azure Active Directory SSO integration with Kno2fy
+
+In this article, you learn how to integrate Kno2fy with Azure Active Directory (Azure AD). Kno2fy empowers healthcare organizations to send, receive, and find patient information across the healthcare ecosystem with just a few quick clicks. When you integrate Kno2fy with Azure AD, you can:
+
+* Control in Azure AD who has access to Kno2fy.
+* Enable your users to be automatically signed-in to Kno2fy with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Kno2fy in a test environment. Kno2fy supports only **SP** initiated single sign-on.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Kno2fy, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Kno2fy single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Kno2fy application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Kno2fy from the Azure AD gallery
+
+Add Kno2fy from the Azure AD application gallery to configure single sign-on with Kno2fy. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Kno2fy** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the value:
+ `urn:auth0:kno2:azuread`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://auth.kno2fy.com/login/callback?connection=azuread`
+
+ c. In the **Sign on URL** textbox, type the URL:
+ `https://app.kno2fy.com/account/login/azuread`
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Kno2fy** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Kno2fy SSO
+
+To configure single sign-on on **Kno2fy** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Kno2fy support team](mailto:support@kno2.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Kno2fy test user
+
+In this section, you create a user called Britta Simon at Kno2fy. Work with [Kno2fy support team](mailto:support@kno2.com) to add the users in the Kno2fy platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Kno2fy Sign-on URL where you can initiate the login flow.
+
+* Go to Kno2fy Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you select the Kno2fy tile in the My Apps, this will redirect to Kno2fy Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Kno2fy you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Knowledge Work Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/knowledge-work-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Knowledge Work
+description: Learn how to configure single sign-on between Azure Active Directory and Knowledge Work.
++++++++ Last updated : 02/09/2023++++
+# Azure Active Directory SSO integration with Knowledge Work
+
+In this article, you learn how to integrate Knowledge Work with Azure Active Directory (Azure AD). Knowledge Work is a cloud service that brings the elements of sales enablement together in a single tool and improves a company's sales productivity. Specifically, it lets teams share sales materials and sales know-how, and provides learning programs for sales. When you integrate Knowledge Work with Azure AD, you can:
+
+* Control in Azure AD who has access to Knowledge Work.
+* Enable your users to be automatically signed-in to Knowledge Work with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Knowledge Work in a test environment. Knowledge Work supports only **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Knowledge Work, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Knowledge Work single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Knowledge Work application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Knowledge Work from the Azure AD gallery
+
+Add Knowledge Work from the Azure AD application gallery to configure single sign-on with Knowledge Work. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Knowledge Work** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.kwork.cloud/saml`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://knowledgework-prd.firebaseapp.com/__/auth/handle`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<CustomerName>.kwork.cloud/login`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Sign-on URL. Contact [Knowledge Work Client support team](mailto:support@knowledgework.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up Knowledge Work** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Knowledge Work SSO
+
+To configure single sign-on on **Knowledge Work** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Knowledge Work support team](mailto:support@knowledgework.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Knowledge Work test user
+
+In this section, a user called B.Simon is created in Knowledge Work. Knowledge Work supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Knowledge Work, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Knowledge Work Sign-on URL where you can initiate the login flow.
+
+* Go to Knowledge Work Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you select the Knowledge Work tile in the My Apps, this will redirect to Knowledge Work Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Knowledge Work you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Configure Cmmc Level 2 Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-access-control.md
The following table provides a list of practice statements and objectives, and Az
| - | - | | AC.L2-3.1.3<br><br>**Practice statement:** Control the flow of CUI in accordance with approved authorizations.<br><br>**Objectives:**<br>Determine if:<br>[a.] information flow control policies are defined;<br>[b.] methods and enforcement mechanisms for controlling the flow of CUI are defined;<br>[c.] designated sources and destinations (for example, networks, individuals, and devices) for CUI within the system and between interconnected systems are identified;<br>[d.] authorizations for controlling the flow of CUI are defined; and<br>[e.] approved authorizations for controlling the flow of CUI are enforced. | Configure Conditional Access policies to control the flow of CUI from trusted locations, trusted devices, approved applications and require app protection policy. For finer grained authorization to CUI, configure app-enforced restrictions (Exchange/SharePoint Online), App Control (with Microsoft Defender for Cloud Apps), Authentication Context. Deploy Azure AD Application Proxy to secure access to on-premises applications.<br>[Location condition in Azure Active Directory Conditional Access ](../conditional-access/location-condition.md)<br>[Grant controls in Conditional Access policy - Require device to be marked as compliant](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require hybrid Azure AD joined device](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require approved client app](../conditional-access/concept-conditional-access-grant.md)<br>[Grant controls in Conditional Access policy - Require app protection policy](../conditional-access/concept-conditional-access-grant.md)<br>[Session controls in Conditional Access policy - Application enforced restrictions](../conditional-access/concept-conditional-access-session.md)<br>[Protect with Microsoft Defender for Cloud Apps Conditional Access App Control](/defender-cloud-apps/proxy-intro-aad)<br>[Cloud apps, actions, and authentication context in Conditional Access policy ](../conditional-access/concept-conditional-access-cloud-apps.md)<br>[Remote access to on-premises apps using Azure AD Application Proxy](../app-proxy/application-proxy.md)<br><br>**Authentication Context**<br>[Configuring Authentication context & Assign to Conditional Access Policy](../conditional-access/concept-conditional-access-cloud-apps.md)<br><br>**Information Protection**<br>Know and protect your data; help prevent data loss.<br>[Protect your sensitive data with Microsoft Purview](/microsoft-365/compliance/information-protection?view=o365-worldwide&preserve-view=true)<br><br>**Conditional Access**<br>[Conditional Access for Azure information protection (AIP)](https://techcommunity.microsoft.com/t5/security-compliance-and-identity/conditional-access-policies-for-azure-information-protection/ba-p/250357) <br><br>**Application Proxy**<br>[Remote access to on-premises apps using Azure AD Application Proxy](../app-proxy/application-proxy.md) | |AC.L2-3.1.4<br><br>**Practice statement:** Separate the duties of individuals to reduce the risk of malevolent activity without collusion.<br><br>**Objectives:**<br>Determine if:<br>[a.] the duties of individuals requiring separation are defined;<br>[b.] responsibilities for duties that require separation are assigned to separate individuals; and<br>[c.] access privileges that enable individuals to exercise the duties that require separation are granted to separate individuals.
| Ensure adequate separation of duties by scoping appropriate access. Configure Entitlement Management Access packages to govern access to applications, groups, Teams, and SharePoint sites. Configure Separation of Duties checks within access packages to avoid a user obtaining excessive access. In Azure AD entitlement management, you can configure multiple policies, with different settings for each user community that will need access through an access package. This configuration includes restrictions such that a user of a particular group, or already assigned a different access package, isn't assigned other access packages, by policy.<br><br>Configure administrative units in Azure Active Directory to scope administrative privilege so that administrators with privileged roles are scoped to only have those privileges on a limited set of directory objects (users, groups, and devices).<br>[What is entitlement management?](../governance/entitlement-management-overview.md)<br>[What are access packages and what resources can I manage with them?](../governance/entitlement-management-overview.md)<br>[Configure separation of duties for an access package in Azure AD entitlement management](../governance/entitlement-management-access-package-incompatible.md)<br>[Administrative units in Azure Active Directory](../roles/administrative-units.md)|
-| AC.L2-3.1.5<br><br>**Practice statement:** Employ the principle of least privilege, including specific security functions and privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] access to privileged accounts is authorized in accordance with the principle of least privilege;<br>[c.] security functions are identified; and<br>[d.] access to security functions is authorized in accordance with the principle of least privilege. | You're responsible for implementing and enforcing the rule of least privilege. This action can be accomplished with Privileged Identity Management for configuring enforcement, monitoring, and alerting. Set requirements and conditions for role membership.<br><br>Once privileged accounts are identified and managed, use [Entitlement Lifecycle Management](../governance/entitlement-management-overview.md) and [Access reviews](../governance/access-reviews-overview.md) to set, maintain and audit adequate access. Use the [MS Graph API](/graph/api/directoryrole-list-members?view=graph-rest-1.0&tabs=http&preserve-view=true) to discover and monitor directory roles.<br><br>**Assign roles**<br>[Assign Azure AD roles in PIM](../privileged-identity-management/pim-how-to-add-role-to-user.md)<br>[Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)<br>[Assign eligible owners and members for privileged access groups](../privileged-identity-management/groups-assign-member-owner.md)<br><br>**Set role settings** <br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Configure Azure resource role settings in PIM](../privileged-identity-management/pim-resource-roles-configure-role-settings.md)<br>[Configure privileged access groups settings in PIM](../privileged-identity-management/groups-role-settings.md)<br><br>**Set up alerts**<br>[Security alerts for Azure AD roles in PIM](../privileged-identity-management/pim-how-to-configure-security-alerts.md)<br>[Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md) |
+| AC.L2-3.1.5<br><br>**Practice statement:** Employ the principle of least privilege, including specific security functions and privileged accounts.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged accounts are identified;<br>[b.] access to privileged accounts is authorized in accordance with the principle of least privilege;<br>[c.] security functions are identified; and<br>[d.] access to security functions is authorized in accordance with the principle of least privilege. | You're responsible for implementing and enforcing the rule of least privilege. This action can be accomplished with Privileged Identity Management for configuring enforcement, monitoring, and alerting. Set requirements and conditions for role membership.<br><br>Once privileged accounts are identified and managed, use [Entitlement Lifecycle Management](../governance/entitlement-management-overview.md) and [Access reviews](../governance/access-reviews-overview.md) to set, maintain and audit adequate access. Use the [MS Graph API](/graph/api/directoryrole-list-members?view=graph-rest-1.0&tabs=http&preserve-view=true) to discover and monitor directory roles.<br><br>**Assign roles**<br>[Assign Azure AD roles in PIM](../privileged-identity-management/pim-how-to-add-role-to-user.md)<br>[Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)<br>[Assign eligible owners and members for PIM for Groups](../privileged-identity-management/groups-assign-member-owner.md)<br><br>**Set role settings** <br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Configure Azure resource role settings in PIM](../privileged-identity-management/pim-resource-roles-configure-role-settings.md)<br>[Configure PIM for Groups settings in PIM](../privileged-identity-management/groups-role-settings.md)<br><br>**Set up alerts**<br>[Security alerts for Azure AD roles in PIM](../privileged-identity-management/pim-how-to-configure-security-alerts.md)<br>[Configure security alerts for Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-configure-alerts.md) |
| AC.L2-3.1.6<br><br>**Practice statement:** Use non-privileged accounts or roles when accessing non security functions.<br><br>**Objectives:**<br>Determine if:<br>[a.] non security functions are identified; and <br>[b.] users are required to use non-privileged accounts or roles when accessing non security functions.<br><br>AC.L2-3.1.7<br><br>**Practice statement:** Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs.<br><br>**Objectives:**<br>Determine if:<br>[a.] privileged functions are defined;<br>[b.] non-privileged users are defined;<br>[c.] non-privileged users are prevented from executing privileged functions; and<br>[d.] the execution of privileged functions is captured in audit logs. |Requirements in AC.L2-3.1.6 and AC.L2-3.1.7 complement each other. Require separate accounts for privileged and non-privileged use. Configure Privileged Identity Management (PIM) to bring just-in-time (JIT) privileged access and remove standing access. Configure role-based Conditional Access policies to limit access to productivity applications for privileged users. For highly privileged users, secure devices as part of the privileged access story. All privileged actions are captured in the Azure AD Audit logs.<br>[Securing privileged access overview](/security/compass/overview)<br>[Configure Azure AD role settings in PIM](../privileged-identity-management/pim-how-to-change-default-settings.md)<br>[Users and groups in Conditional Access policy](../conditional-access/concept-conditional-access-users-groups.md)<br>[Why are privileged access devices important](/security/compass/privileged-access-devices) | | AC.L2-3.1.8<br><br>**Practice statement:** Limit unsuccessful sign-on attempts.<br><br>**Objectives:**<br>Determine if:<br>[a.] the means of limiting unsuccessful sign-on attempts is defined; and<br>[b.] the defined means of limiting unsuccessful sign-on attempts is implemented. | Enable custom smart lockout settings. Configure the lockout threshold and lockout duration in seconds to implement these requirements.<br>[Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md)<br>[Manage Azure AD smart lockout values](../authentication/howto-password-smart-lockout.md) | | AC.L2-3.1.9<br><br>**Practice statement:** Provide privacy and security notices consistent with applicable CUI rules.<br><br>**Objectives:**<br>Determine if:<br>[a.] privacy and security notices required by CUI-specified rules are identified, consistent, and associated with the specific CUI category; and<br>[b.] privacy and security notices are displayed. | With Azure AD, you can deliver notification or banner messages for all apps that require and record acknowledgment before granting access. You can granularly target these terms of use policies to specific users (Member or Guest). You can also customize them per application via conditional access policies.<br><br>**Conditional access** <br>[What is conditional access in Azure AD?](../conditional-access/overview.md)<br><br>**Terms of use**<br>[Azure Active Directory terms of use](../conditional-access/terms-of-use.md)<br>[View report of who has accepted and declined](../conditional-access/terms-of-use.md) |
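For the directory-role discovery and monitoring called out for AC.L2-3.1.5 above, a quick sketch using `az rest` against the Microsoft Graph endpoints linked in that row (the `{directoryRole-id}` value is a placeholder for a role object ID returned by the first call):

```azurecli
# List activated directory roles, then list the members of one of them
az rest --method get --url "https://graph.microsoft.com/v1.0/directoryRoles"
az rest --method get --url "https://graph.microsoft.com/v1.0/directoryRoles/{directoryRole-id}/members"
```

Running these on a schedule gives a point-in-time inventory of privileged role holders to reconcile against PIM assignments and access reviews.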
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
Previously updated : 01/26/2023 Last updated : 02/10/2023
A container using subPath volume mount won't receive secret updates when it's ro
2. Create an AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver capability using the [`az aks create`][az-aks-create] command with the `azure-keyvault-secrets-provider` add-on. ```azurecli-interactive
- az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-managed-identity
+ az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider
``` 3. A user-assigned managed identity, named `azureKeyvaultSecretsProvider`, is created by the add-on to access Azure resources. The following example uses this identity to connect to the Azure key vault where the secrets will be stored, but you can also use other [identity access methods][identity-access-methods]. Take note of the identity's `clientId` in the output.
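If you need to look the `clientId` up again later, one way is to read it from the cluster's add-on profile. This is a sketch that assumes the add-on profile key `azureKeyvaultSecretsProvider` used by the add-on above:

```azurecli
# Read the add-on's user-assigned managed identity clientId from the cluster
az aks show -g myResourceGroup -n myAKSCluster \
  --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
```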
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
If you want to use an outbound proxy with the Dapr extension for AKS, you can do
- `NO_PROXY` 1. [Installing the proxy certificate in the sidecar](https://docs.dapr.io/operations/configuration/install-certificates/).
+## Using Mariner-based images
+
+From Dapr version 1.8.0, you can use Mariner-based images with the Dapr extension. To use them, set the `global.tag` flag:
+
+```azurecli
+az k8s-extension upgrade --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--set global.tag=1.10.0-mariner
+```
+
+- [Learn more about using Mariner-based images with Dapr.][dapr-mariner]
+- [Learn more about deploying Mariner on AKS.][aks-mariner]
++ ## Disable automatic CRD updates With Dapr version 1.9.2, CRDs are automatically upgraded when the extension upgrades. To disable this setting, you can set `hooks.applyCrds` to `false`.
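For example, mirroring the `az k8s-extension upgrade` pattern shown earlier for Mariner-based images, disabling automatic CRD updates might look like this sketch (the cluster and resource group names are placeholders):

```azurecli
# Turn off the hook that applies CRD updates during extension upgrades
az k8s-extension upgrade --cluster-type managedClusters \
--cluster-name myAKSCluster \
--resource-group myResourceGroup \
--name dapr \
--extension-type Microsoft.Dapr \
--set hooks.applyCrds=false
```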
Once you have successfully provisioned Dapr in your AKS cluster, try deploying a
[install-cli]: /cli/azure/install-azure-cli [dapr-migration]: ./dapr-migration.md [dapr-settings]: ./dapr-settings.md
+[aks-mariner]: ./cluster-configuration.md#mariner-os
+ <!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
Once you have successfully provisioned Dapr in your AKS cluster, try deploying a
[dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions [dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/ [supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
+[dapr-mariner]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-deploy/#using-mariner-based-images
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
Previously updated : 02/07/2022 Last updated : 01/13/2023 # API Management policy expressions
The `context` variable is implicitly available in every policy [expression](api-
|<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`| |<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)| |<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`|
-|<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(preserveContent: bool = false): Where T: string, byte[],JObject, JToken, JArray, XNode, XElement, XDocument`<br /><br /> The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods are used to read either a request and response message body in specified type `T`. By default, the method:<br /><ul><li>Uses the original message body stream.</li><li>Renders it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as in [this example](api-management-transformation-policies.md#SetBody).|
+|<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(bool preserveContent = false): Where T: string, byte[], JObject, JToken, JArray, XNode, XElement, XDocument` <br /><br /> - The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods read a request or response message body in specified type `T`. <br/><br/> - Or - <br/><br/>`AsFormUrlEncodedContent(bool preserveContent = false)` <br/></br>- The `context.Request.Body.AsFormUrlEncodedContent()` and `context.Response.Body.AsFormUrlEncodedContent()` methods read URL-encoded form data in a request or response message body and return an `IDictionary<string, IList<string>>` object. The decoded object supports `IDictionary` operations and the following expressions: `ToQueryString()`, `JsonConvert.SerializeObject()`, `ToFormUrlEncodedContent().` <br/><br/> By default, the `As<T>` and `AsFormUrlEncodedContent()` methods:<br /><ul><li>Use the original message body stream.</li><li>Render it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as shown in examples for the [set-body](set-body-policy.md#examples) policy.|
|<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).| |<a id="ref-iurl"></a>`IUrl`|`Host`: `string`<br /><br /> `Path`: `string`<br /><br /> `Port`: `int`<br /><br /> [`Query`](#ref-iurl-query): `IReadOnlyDictionary<string, string[]>`<br /><br /> `QueryString`: `string`<br /><br /> `Scheme`: `string`| |<a id="ref-iuseridentity"></a>`IUserIdentity`|`Id`: `string`<br /><br /> `Provider`: `string`|
api-management Set Body Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-body-policy.md
Previously updated : 12/02/2022 Last updated : 01/13/2023
This example shows how to perform content filtering by removing data elements fr
### Transform JSON using a Liquid template ```xml
+<set-body template="liquid">
{ "order": { "id": "{{body.customer.purchase.identifier}}", "summary": "{{body.customer.purchase.orderShortDesc}}" } }
+</set-body>
+```
+
+### Access the body as URL-encoded form data
+The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), and then converts it to JSON. Since we aren't preserving the original request body, accessing it later in the pipeline will result in an exception.
+
+```xml
+<set-body> 
+@{ 
+ var inBody = context.Request.Body.AsFormUrlEncodedContent();
+ return JsonConvert.SerializeObject(inBody); 
+} 
+</set-body>
```
+### Access and return body as URL-encoded form data
+The following example uses the `AsFormUrlEncodedContent()` expression to access the request body as URL-encoded form data (content type `application/x-www-form-urlencoded`), adds data to the payload, and returns URL-encoded form data. Since we aren't preserving the original request body, accessing it later in the pipeline will result in an exception.
+
+```xml
+<set-body> 
+@{ 
+ var body = context.Request.Body.AsFormUrlEncodedContent();
+ body["newKey"].Add("newValue");
+ return body.ToFormUrlEncodedContent(); 
+} 
+</set-body>
+```
++ ## Related policies * [API Management transformation policies](api-management-transformation-policies.md)
app-service Routine Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/routine-maintenance.md
+
+ Title: App Service routine maintenance
+description: Learn more about the routine, planned maintenance to keep the App Service platform up-to-date and secure.
+
+tags: app-service
++ Last updated : 02/08/2023++
+# Routine (planned) maintenance for App Service
+
+Routine maintenance covers behind-the-scenes updates to the Azure App Service platform. Maintenance can include performance improvements, bug fixes,
+new features, or security updates. App Service maintenance can apply to App Service itself or to the underlying operating system.
+
+>[!IMPORTANT]
+>A breaking change or deprecation of functionality isn't part of routine maintenance. For details on deprecation, see [Modern Lifecycle Policy - Microsoft Lifecycle | Microsoft Learn](/lifecycle/policies/modern).
+>
+
+Our service quality and uptime guarantees continue to apply during maintenance periods. Maintenance periods are communicated to help customers get visibility into platform changes.
+
+## What to expect
+
+Like security updates on personal computers, mobile phones, and other devices, machines in the cloud also need the latest updates. Unlike physical devices, cloud solutions like Azure App Service make these routine updates easier to absorb. There's no need to "stop working" for a certain period and wait until patches are installed. Any workload can be shifted to different hardware in a matter of seconds while updates are installed. Updates are typically made monthly, but the cadence can vary based on need and other factors.
+
+Since a typical cloud solution consists of multiple applications, databases, storage accounts, functions, and other resources, various parts of your solution can undergo maintenance at different times. Some of this coordination is related to geography, region, data centers, and availability zones. It's also a deliberate property of the cloud that not everything is touched simultaneously.
+
+For more information about how updates are rolled out gradually and safely, see [Safe deployment practices - Azure DevOps | Microsoft Learn](/devops/operate/safe-deployment-practices).
++
+In order from top to bottom, a maintenance notification shows:
+- A descriptive title of the maintenance event
+- Impacted regions and subscriptions
+- Expected maintenance window
+
+## Frequently Asked Questions
+
+### Why is the maintenance taking so long?
+
+Maintenance fundamentally represents delivering the latest updates to the platform and service. It's difficult to predict, down to a specific time, when individual apps are affected, so more generic notifications are sent out. The time ranges in those notifications don't reflect the experience at the app level, but the overall operation across all resources. Apps that undergo maintenance restart instantly on freshly updated machines and continue working, so there's no window during which requests or traffic aren't served.
+
+### Why am I getting so many notifications?
+
+A typical scenario is that customers have multiple applications that are upgraded at different times. To avoid sending notifications for each of them, a more generic notification is sent that covers multiple resources. The notification is sent at the beginning of the maintenance window and throughout it. Because the window is long, you can receive multiple reminders for the same rollout, which makes it easier to correlate any restart, interruption, or issue with the maintenance if needed.
+
+### How is routine maintenance related to SLA?
+
+Platform maintenance isn't expected to impact application uptime or availability. Applications continue to stay online while platform maintenance occurs. Platform maintenance may cause applications to be cold started on new virtual machines, which can lead to cold start delays. An application is still considered to be online, even while cold-starting. For best practices to minimize/avoid cold starts, consider using [local cache for Windows apps](overview-local-cache.md) as well as [Health check](monitor-instances-health-check.md). It's not expected that sites would incur any SLA violation during maintenance windows.
+
+### How does the upgrade work, and how does it ensure the smooth operation of my apps?
+
+Azure App Service is a fleet of scale units that host customers' web applications and solutions. Each scale unit is further divided into smaller pieces, sliced into upgrade domains and availability zones. This layout optimizes placement of larger App Service plans and smooths deployments, since not all machines in each scale unit are updated at once. The fleet upgrades machines iteratively while monitoring fleet health, so the system can stop the rollout any time there's an issue. This process is described in detail at [Demystifying the magic behind App Service OS updates - Azure App Service](https://azure.github.io/AppService/2018/01/18/Demystifying-the-magic-behind-App-Service-OS-updates.html).
+
+### Are business hours reflected?
+
+Maintenance operations are optimized to run outside standard business hours (9 AM to 5 PM) because, statistically, that's a better time for any interruptions and restarts of workloads: there's less stress on the system, both in customer applications and, transitively, on the platform itself.
+
+### What are my options to control routine maintenance?
+
+If you run your workloads in an Isolated SKU via App Service Environment v3, you can also schedule upgrades when needed. This capability is described in detail in [Control and automate planned maintenance for App Service Environment v3 - Azure App Service](https://azure.github.io/AppService/2022/09/15/Configure-automation-for-upgrade-preferences-in-App-Service-Environment.html).
+
+### Can I prepare my apps better for restarts?
+
+If your applications need extra time to come online after a restart (a typical pattern is a heavy dependency on external resources during application warm-up or start-up), consider using [Health check](monitor-instances-health-check.md). You can use it to tell the platform that your application isn't ready to receive requests yet, and the system can use that information to route requests to other instances in your App Service plan. In that case, it's recommended to have at least two instances in the plan.
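As a minimal CLI sketch of enabling Health check and running more than one instance (the `/api/health` path is a hypothetical endpoint in your app, `healthCheckPath` is the site-config property this feature uses, and the resource names are placeholders; the portal's **Health check** blade is the more common route):

```azurecli
# Point health probing at the app's own health endpoint (path is an example)
az webapp config set --resource-group MyResourceGroup --name my-app \
  --generic-configurations '{"healthCheckPath": "/api/health"}'

# Run at least two instances so an instance that isn't ready can be taken out of rotation
az appservice plan update --resource-group MyResourceGroup --name my-plan --number-of-workers 2
```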
+
+### My applications have been online, but since these notifications started showing up things are worse. What changed?
+
+Updates and maintenance events have been happening to the platform since its inception. The frequency of updates has decreased over time, so the number of interruptions has also decreased and uptime has increased. However, there's now more visibility into all changes, which can create the perception that more changes are being made.
+
+## Next steps
+
+[Control and automate planned maintenance for App Service Environment v3 - Azure App Service](https://azure.github.io/AppService/2022/09/15/Configure-automation-for-upgrade-preferences-in-App-Service-Environment.html)
+
+[Demystifying the magic behind App Service OS updates - Azure App Service](https://azure.github.io/AppService/2018/01/18/Demystifying-the-magic-behind-App-Service-OS-updates.html)
+
+[Routine Planned Maintenance Notifications for Azure App Service - Azure App Service](https://azure.github.io/AppService/2022/02/01/App-Service-Planned-Notification-Feature.html)
applied-ai-services Form Recognizer Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md
Previously updated : 01/23/2023 Last updated : 02/10/2023
+monikerRange: 'form-recog-2.1.0'
+recommendations: false
-# Use Form Recognizer containers in disconnected environments
+# Form Recognizer containers in disconnected environments
+
+**This article applies to:** ![Form Recognizer v2.1 checkmark](../media/yes-icon.png) **Form Recognizer v2.1**.
<!-- markdownlint-disable MD036 --> <!-- markdownlint-disable MD001 -->
Azure Cognitive Services Form Recognizer containers allow you to use Form Recogn
Before attempting to run a Docker container in an offline environment, make sure you're familiar with the following requirements to successfully download and use the container: * Host computer requirements and recommendations.
-* The Docker `pull` command you'll use to download the container.
+* The Docker `pull` command to download the container.
* How to validate that a container is running. * How to send queries to the container's endpoint, once it's running.
docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:l
## Configure the container to be run in a disconnected environment
-Now that you've downloaded your container, you'll need to execute the `docker run` command with the following parameter:
+Now that you've downloaded your container, you need to execute the `docker run` command with the following parameter:
-* **`DownloadLicense=True`**. This parameter will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use the license file in corresponding approved container.
+* **`DownloadLicense=True`**. This parameter downloads a license file that enables your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file can't be used to run the container. You can only use the license file in the corresponding approved container.
> [!IMPORTANT] >The `docker run` command will generate a template that you can use to run the container. The template contains parameters you'll need for the downloaded models and configuration file. Make sure you save this template.
-The following example shows the formatting for the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+The following example shows the formatting for the `docker run` command to use with placeholder values. Replace these placeholder values with your own values.
| Placeholder | Value | Format or example | |-|-|| | `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/host/license:/path/to/license/directory` |
+| `{LICENSE_MOUNT}` | The path where the license is downloaded, and mounted. | `/host/license:/path/to/license/directory` |
| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` | | `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`{string}`| | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
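As a rough, illustrative sketch of how the placeholders above typically come together for the license-download run (the `eula`, `billing`, `apikey`, and `Mounts:License` parameter names are assumptions based on general Cognitive Services container conventions, not values confirmed in this article; rely on the template that the `docker run` command generates for the authoritative format):

```Docker
docker run --rm -it -p 5000:5000 \
-v {LICENSE_MOUNT} \
{IMAGE} \
eula=accept \
billing={ENDPOINT_URI} \
apikey={API_KEY} \
DownloadLicense=True \
Mounts:License={CONTAINER_LICENSE_DIRECTORY}
```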
Placeholder | Value | Format or example |
| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` | `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` | | `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
-| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/host/license:/path/to/license/directory` |
+| `{LICENSE_MOUNT}` | The path where the license is located and mounted. | `/host/license:/path/to/license/directory` |
| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` | | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` | | `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
When operating Docker containers in a disconnected environment, the container wi
#### Arguments for storing logs
-When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs will be stored:
+When run in a disconnected environment, an output mount must be available to the container to store usage logs. For example, you would include `-v /host/output:{OUTPUT_PATH}` and `Mounts:Output={OUTPUT_PATH}` in the following example, replacing `{OUTPUT_PATH}` with the path where the logs are stored:
```Docker docker run -v /host/output:{OUTPUT_PATH} ... <image> ... Mounts:Output={OUTPUT_PATH}
The container provides two endpoints for returning records about its usage.
#### Get all records
-The following endpoint will provide a report summarizing all of the usage collected in the mounted billing record directory.
+The following endpoint provides a report summarizing all of the usage collected in the mounted billing record directory.
```http https://<service>/records/usage-logs/
https://<service>/records/usage-logs/
`http://localhost:5000/records/usage-logs`
-The usage-log endpoint will return a JSON response similar to the following example:
+The usage-log endpoint returns a JSON response similar to the following example:
```json {
The usage-log endpoint will return a JSON response similar to the following exam
#### Get records for a specific month
-The following endpoint will provide a report summarizing usage over a specific month and year.
+The following endpoint provides a report summarizing usage over a specific month and year.
```HTTP https://<service>/records/usage-logs/{MONTH}/{YEAR} ```
-This usage-logs endpoint will return a JSON response similar to the following example:
+This usage-logs endpoint returns a JSON response similar to the following example:
```json {
This usage-logs endpoint will return a JSON response similar to the following ex
### Purchase a different commitment plan for disconnected containers
-Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan, however you can purchase more unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan.
+Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you're charged the full price immediately. During the commitment period, you can't change your commitment plan; however, you can purchase more units at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
You can choose a different commitment plan in the **Commitment tier pricing** settings of your resource under the **Resource Management** section. ### End a commitment plan
-If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan will expire on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You'll be able to continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you cancel at or before that time, you won't be charged for the following year.
+If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You can continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you cancel at or before that time, there are no charges for the next year.
## Troubleshooting
-Run the container with an output mount and logging enabled. These settings will enable the container generates log files that are helpful for troubleshooting issues that occur while starting or running the container.
+Run the container with an output mount and logging enabled. These settings enable the container to generate log files that are helpful for troubleshooting issues that occur while starting or running the container.
> [!TIP] > For more troubleshooting information and guidance, see [Disconnected containers Frequently asked questions (FAQ)](../../../cognitive-services/containers/disconnected-container-faq.yml).
applied-ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities.md
Previously updated : 10/20/2022 Last updated : 02/09/2023 monikerRange: '>=form-recog-2.1.0' recommendations: false
recommendations: false
# Managed identities for Form Recognizer + [!INCLUDE [applies to v3.0 and v2.1](includes/applies-to-v3-0-and-v2-1.md)] Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources:
Managed identities for Azure resources are service principals that create an Azu
> [!TIP] > Managed identities eliminate the need for you to manage credentials, including Shared Access Signature (SAS) tokens. Managed identities are a safer way to grant access to data without having credentials in your code. + ## Private storage account access Private Azure storage account access and authentication are supported by [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). If you have an Azure storage account, protected by a Virtual Network (VNet) or firewall, Form Recognizer can't directly access your storage account data. However, once a managed identity is enabled, Form Recognizer can access your storage account using an assigned managed identity credential.
Managed identities for Azure resources are service principals that create an Azu
## Prerequisites
-To get started, you'll need:
+To get started, you need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). * A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) or [**Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. For detailed steps, _see_ [Create a Cognitive Services resource using the Azure portal](../../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows).
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. You'll create containers to store and organize your blob data within your storage account.
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Form Recognizer resource. You also need to create containers to store and organize your blob data within your storage account.
* If your storage account is behind a firewall, **you must enable the following configuration**: </br></br>
There are two types of managed identity: **system-assigned** and **user-assigned
* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
-* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity will be deleted as well.
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
-In the following steps, we'll enable a system-assigned managed identity and grant Form Recognizer limited access to your Azure blob storage account.
+In the following steps, we enable a system-assigned managed identity and grant Form Recognizer limited access to your Azure blob storage account.
## Enable a system-assigned managed identity
You need to grant Form Recognizer access to your storage account before it can c
:::image type="content" source="media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
-1. An Azure role assignments page will open. Choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
+1. On the Azure role assignments page that opens, choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
:::image type="content" source="media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
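If you'd rather script the portal steps above, a hedged CLI sketch follows. It assumes the `az cognitiveservices account identity` commands are available in your CLI version and that **Storage Blob Data Reader** is the role you intend to grant; the resource names are placeholders:

```azurecli
# Enable the system-assigned managed identity on the Form Recognizer resource
az cognitiveservices account identity assign \
  --name my-form-recognizer --resource-group my-resource-group

# Capture the identity's principal ID and the storage account's resource ID
principalId=$(az cognitiveservices account show \
  --name my-form-recognizer --resource-group my-resource-group \
  --query identity.principalId -o tsv)
storageId=$(az storage account show \
  --name mystorageaccount --resource-group my-resource-group --query id -o tsv)

# Grant the identity read access to blob data in the storage account
az role assignment create --assignee "$principalId" \
  --role "Storage Blob Data Reader" --scope "$storageId"
```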
automation Automation Solution Vm Management Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md
# Configure Start/Stop VMs during off-hours > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/articles/azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VMs during off-hours, version 1, will retire by CY23 and is now unavailable in the marketplace. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you already have the version 1 solution deployed, you can still use the feature, and we'll provide support until retirement in CY23. Details about the retirement will be shared soon.
This article describes how to configure the [Start/Stop VMs during off-hours](automation-solution-vm-management.md) feature to support the described scenarios. You can also learn how to:
Start/Stop VMs during off-hours doesn't include a predefined set of Automation j
## Next steps * To monitor the feature during operation, see [Query logs from Start/Stop VMs during off-hours](automation-solution-vm-management-logs.md).
-* To handle problems during VM management, see [Troubleshoot Start/Stop VMs during off-hours issues](troubleshoot/start-stop-vm.md).
+* To handle problems during VM management, see [Troubleshoot Start/Stop VMs during off-hours issues](troubleshoot/start-stop-vm.md).
automation Automation Solution Vm Management Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-remove.md
# Remove Start/Stop VMs during off-hours from Automation account > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/articles/azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VMs during off-hours, version 1, will retire by CY23 and is now unavailable in the marketplace. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you already have the version 1 solution deployed, you can still use the feature, and we'll provide support until retirement in CY23. Details about the retirement will be shared soon.
After you enable the Start/Stop VMs during off-hours feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done using one of the following methods based on the supported deployment models:
To delete Start/Stop VMs during off-hours from your Automation account, perform
## Next steps
-To re-enable this feature, see [Enable Start/Stop during off-hours](automation-solution-vm-management-enable.md).
+To re-enable this feature, see [Enable Start/Stop during off-hours](automation-solution-vm-management-enable.md).
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
# Start/Stop VMs during off-hours overview > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](https://learn.microsoft.com/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VMs during off-hours, version 1, will retire by CY23 and is now unavailable in the marketplace. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you already have the version 1 solution deployed, you can still use the feature, and we'll provide support until retirement in CY23. Details about the retirement will be shared soon.
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
This article explains the latest version of change tracking support using Azu
## Key benefits -- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent (Preview)](/articles/azure-monitor/agents/agents-overview.md) that enhances security, reliability, and facilitates multi-homing experience to store data.
+- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent (Preview)](/azure/azure-monitor/agents/agents-overview) that enhances security, reliability, and facilitates multi-homing experience to store data.
- **Compatibility with tracking tool** - Compatible with the Change tracking (CT) extension deployed through the Azure Policy on the client's virtual machine. You can switch to Azure Monitor Agent (AMA), and then the CT extension pushes the software, files, and registry to AMA.-- **Multi-homing experience** - Provides standardization of management from one central workspace. You can [transition from Log Analytics (LA) to AMA](/articles/azure-monitor/agents/azure-monitor-agent-migration.md) so that all VMs point to a single workspace for data collection and maintenance.
+- **Multi-homing experience** - Provides standardization of management from one central workspace. You can [transition from Log Analytics (LA) to AMA](/azure/azure-monitor/agents/azure-monitor-agent-migration) so that all VMs point to a single workspace for data collection and maintenance.
- **Rules management** - Uses [Data Collection Rules](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-public-preview/) to configure or customize various aspects of data collection. For example, you can change the frequency of file collection. ## Current limitations
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
# Supported regions for linked Log Analytics workspace > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/articles/azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VMs during off-hours, version 1, will retire by CY23 and is now unavailable in the marketplace. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you already have the version 1 solution deployed, you can still use the feature, and we'll provide support until retirement in CY23. Details about the retirement will be shared soon.
In Azure Automation, you can enable the Update Management, Change Tracking and Inventory, and Start/Stop VMs during off-hours features for your servers and virtual machines. These features have a dependency on a Log Analytics workspace, and therefore require linking the workspace with an Automation account. However, only certain regions are supported to link them together. In general, the mapping is *not* applicable if you plan to link an Automation account to a workspace that won't have these features enabled.
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
For at-scale migration of multiple Agent based Hybrid Workers, you can also use
#### [Bicep template](#tab/bicep-template)
-You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/articles/azure-resource-manager/bicep/overview.md)
+You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM, and add it to an existing Hybrid Worker group. Learn more about [Bicep](/azure/azure-resource-manager/bicep/overview).
```Bicep param automationAccount string
sudo python onboarding.py --deregister --endpoint="<URL>" --key="<PrimaryAccessK
- To learn more about Hybrid Runbook Worker, see [Automation Hybrid Runbook Worker overview](automation-hybrid-runbook-worker.md). - To deploy Extension-based Hybrid Worker, see [Deploy an extension-based Windows or Linux User Hybrid Runbook Worker in Azure Automation](extension-based-hybrid-runbook-worker-install.md).-- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md).
+- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md).
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md
To create a SQL Managed Instance, use `az sql mi-arc create`. See the following
> A ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). If no storage class is specified for backups, the default storage class in Kubernetes is used, and if it isn't RWX capable, the Arc SQL Managed Instance installation may not succeed. --
-### [Indirectly connected mode](#tab/indirectly)
+### [Directly connected mode](#tab/directly-connected-mode)
```azurecli
-az sql mi-arc create -n <instanceName> --storage-class-backups <RWX capable storageclass> --k8s-namespace <namespace> --use-k8s
+az sql mi-arc create --name <name> --resource-group <group> --subscription <subscription> --custom-location <custom-location> --storage-class-backups <RWX capable storageclass>
``` Example: ```azurecli
-az sql mi-arc create -n sqldemo --storage-class-backups mybackups --k8s-namespace my-namespace --use-k8s
+az sql mi-arc create --name sqldemo --resource-group rg --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --storage-class-backups mybackups
```
-### [Directly connected mode](#tab/directly)
+
+### [Indirectly connected mode](#tab/indirectly-connected-mode)
```azurecli
-az sql mi-arc create --name <name> --resource-group <group> --subscription <subscription> --custom-location <custom-location> --storage-class-backups <RWX capable storageclass>
+az sql mi-arc create -n <instanceName> --storage-class-backups <RWX capable storageclass> --k8s-namespace <namespace> --use-k8s
``` Example: ```azurecli
-az sql mi-arc create --name sqldemo --resource-group rg --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --storage-class-backups mybackups
+az sql mi-arc create -n sqldemo --storage-class-backups mybackups --k8s-namespace my-namespace --use-k8s
```
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This release introduces the following breaking changes:
### Additional changes
-* A new optional parameter was added to `azdata arc postgres server create` called `--volume-claim mounts`. The value is a comma-separated list of volume claim mounts. A volume claim mount is a pair of volume type and PVC name. The only volume type currently supported is `backup`. In PostgreSQL, when volume type is `backup`, the PVC is mounted to `/mnt/db-backups`. This enables sharing backups between PostgresSQL instances so that the backup of one PostgresSQL instance can be restored in another instance.
+* A new optional parameter was added to `azdata arc postgres server create` called `--volume-claim mounts`. The value is a comma-separated list of volume claim mounts. A volume claim mount is a pair of volume type and PVC name. The only volume type currently supported is `backup`. In PostgreSQL, when volume type is `backup`, the PVC is mounted to `/mnt/db-backups`. This enables sharing backups between PostgreSQL instances so that the backup of one PostgreSQL instance can be restored in another instance.
-* New short names for PostgresSQL custom resource definitions:
+* New short names for PostgreSQL custom resource definitions:
* `pg11` * `pg12` * Telemetry upload provides user with either:
azure-functions Create First Function Arc Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-cli.md
On your local computer:
# [JavaScript](#tab/nodejs)
-+ [Node.js](https://nodejs.org/) version 12. Node.js version 10 is also supported.
++ [Node.js](https://nodejs.org/) version 18. Node.js version 14 is also supported. + [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cnode#install-the-azure-functions-core-tools). + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later
az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custo
# [JavaScript](#tab/nodejs) ```azurecli
-az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime node --runtime-version 12
+az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime node --runtime-version 18
``` # [Python](#tab/python)
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 02/02/2023 Last updated : 02/10/2023 # Compare Azure Government and global Azure
This section outlines variations and considerations when using **Azure Bot Servi
### [Azure Bot Service](/azure/bot-service/)
-The following Azure Bot Service **features aren't currently available** in Azure Government (updated 16 August 2021):
+The following Azure Bot Service **features aren't currently available** in Azure Government:
- Bot Framework Composer integration - Channels (due to availability of dependent services)
- - Teams Channel
- Direct Line Speech Channel - Telephony Channel (Preview) - Microsoft Search Channel (Preview)
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
let freeTables = dynamic([
"OfficeActivity","Operation","SecurityAlert","SecurityIncident","UCClient","UCClientReadinessStatus", "UCClientUpdateStatus","UCDOAggregatedStatus","UCDOStatus","UCDeviceAlert","UCServiceUpdateStatus","UCUpdateAlert", "Usage","WUDOAggregatedStatus","WUDOStatus","WaaSDeploymentStatus","WaaSInsiderStatus","WaaSUpdateStatus"]);
-Usage | where DataType !in (freeTables) | where TimeGenerated > ago(30d) | summarize MonthlyGB=sum(Quantity)/1000
+Usage
+| where DataType !in (freeTables)
+| where TimeGenerated > ago(30d)
+| summarize MonthlyGB=sum(Quantity)/1000
+```
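For a per-table view of the same billable volume, a small variation that uses only columns already present in the `Usage` table:

```kusto
// Billable ingestion per table over the last 30 days, largest first
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize MonthlyGB=sum(Quantity)/1000 by DataType
| sort by MonthlyGB desc
```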
+
+To look for data which might not have IsBillable correctly set (and which could result in incorrect billing, or more specifically under-billing), use this query on your workspace:
+
+```kusto
+let freeTables = dynamic([
+"AppAvailabilityResults","AppSystemEvents","ApplicationInsights","AzureActivity","AzureNetworkAnalyticsIPDetails_CL",
+"AzureNetworkAnalytics_CL","AzureTrafficAnalyticsInsights_CL","ComputerGroup","DefenderIoTRawEvent","Heartbeat",
+"MAApplication","MAApplicationHealth","MAApplicationHealthIssues","MAApplicationInstance","MAApplicationInstanceReadiness",
+"MAApplicationReadiness","MADeploymentPlan","MADevice","MADeviceNotEnrolled","MADeviceReadiness","MADriverInstanceReadiness",
+"MADriverReadiness","MAProposedPilotDevices","MAWindowsBuildInfo","MAWindowsCurrencyAssessment",
+"MAWindowsCurrencyAssessmentDailyCounts","MAWindowsDeploymentStatus","NTAIPDetails_CL","NTANetAnalytics_CL",
+"OfficeActivity","Operation","SecurityAlert","SecurityIncident","UCClient","UCClientReadinessStatus",
+"UCClientUpdateStatus","UCDOAggregatedStatus","UCDOStatus","UCDeviceAlert","UCServiceUpdateStatus","UCUpdateAlert",
+"Usage","WUDOAggregatedStatus","WUDOStatus","WaaSDeploymentStatus","WaaSInsiderStatus","WaaSUpdateStatus"]);
+Usage
+| where DataType !in (freeTables)
+| where TimeGenerated > ago(30d)
+| where IsBillable == false
+| summarize MonthlyPotentialUnderbilledGB=sum(Quantity)/1000 by DataType
``` ## Querying for common data types
azure-resource-manager Bicep Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md
description: Describes the lambda functions to use in a Bicep file.
Previously updated : 09/20/2022 Last updated : 02/09/2023 # Lambda functions for Bicep
This article describes the lambda functions to use in Bicep. [Lambda expressions
Bicep lambda function has these limitations: -- Lambda expression can only be specified directly as function arguments in these functions: [`filter()`](#filter), [`map()`](#map), [`reduce()`](#reduce), and [`sort()`](#sort).
+- Lambda expressions can only be specified directly as function arguments in these functions: [`filter()`](#filter), [`map()`](#map), [`reduce()`](#reduce), [`sort()`](#sort), and [`toObject()`](#toobject).
- Using lambda variables (the temporary variables used in the lambda expressions) inside resource or module array access isn't currently supported. - Using lambda variables inside the [`listKeys`](./bicep-functions-resource.md#list) function isn't currently supported. - Using lambda variables inside the [reference](./bicep-functions-resource.md#reference) function isn't currently supported.
An array.
### Examples
-The following examples show how to use the filter function.
+The following examples show how to use the `filter` function.
```bicep var dogs = [
An array.
### Example
-The following example shows how to use the map function.
+The following example shows how to use the `map` function.
```bicep var dogs = [
Any.
### Example
-The following examples show how to use the reduce function.
+The following examples show how to use the `reduce` function.
```bicep var dogs = [
An array.
### Example
-The following example shows how to use the sort function.
+The following example shows how to use the `sort` function.
```bicep var dogs = [
The output from the preceding example sorts the dog objects from the youngest to
| - | - | -- | | dogsByAge | Array | [{"name":"Indy","age":2,"interests":["Butter"]},{"name":"Casper","age":3,"interests":["Other dogs"]},{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] |
+## toObject
+
+`toObject(inputArray, lambda expression, [lambda expression])`
+
+Converts an array to an object with a custom key function and optional custom value function.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| inputArray |Yes |array |The array used for creating an object.|
+| lambda expression |Yes |expression |The lambda expression used to provide the key predicate.|
+| lambda expression |No |expression |The lambda expression used to provide the value predicate.|
+
+### Return value
+
+An object.
+
+### Example
+
+The following example shows how to use the `toObject` function with the two required parameters:
+
+```Bicep
+var dogs = [
+ {
+ name: 'Evie'
+ age: 5
+ interests: [ 'Ball', 'Frisbee' ]
+ }
+ {
+ name: 'Casper'
+ age: 3
+ interests: [ 'Other dogs' ]
+ }
+ {
+ name: 'Indy'
+ age: 2
+ interests: [ 'Butter' ]
+ }
+ {
+ name: 'Kira'
+ age: 8
+ interests: [ 'Rubs' ]
+ }
+]
+
+output dogsObject object = toObject(dogs, entry => entry.name)
+```
+
+The preceding example generates an object based on an array.
+
+| Name | Type | Value |
+| - | - | -- |
+| dogsObject | Object | {"Evie":{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},"Casper":{"name":"Casper","age":3,"interests":["Other dogs"]},"Indy":{"name":"Indy","age":2,"interests":["Butter"]},"Kira":{"name":"Kira","age":8,"interests":["Rubs"]}} |
+
+The following `toObject` function, which uses the optional third parameter, produces the same output.
+
+```Bicep
+output dogsObject object = toObject(dogs, entry => entry.name, entry => entry)
+```
+
+The following example shows how to use the `toObject` function with three parameters.
+
+```Bicep
+var dogs = [
+ {
+ name: 'Evie'
+ properties: {
+ age: 5
+ interests: [ 'Ball', 'Frisbee' ]
+ }
+ }
+ {
+ name: 'Casper'
+ properties: {
+ age: 3
+ interests: [ 'Other dogs' ]
+ }
+ }
+ {
+ name: 'Indy'
+ properties: {
+ age: 2
+ interests: [ 'Butter' ]
+ }
+ }
+ {
+ name: 'Kira'
+ properties: {
+ age: 8
+ interests: [ 'Rubs' ]
+ }
+ }
+]
+output dogsObject object = toObject(dogs, entry => entry.name, entry => entry.properties)
+```
+
+The preceding example generates an object based on an array.
+
+| Name | Type | Value |
+| - | - | -- |
+| dogsObject | Object | {"Evie":{"age":5,"interests":["Ball","Frisbee"]},"Casper":{"age":3,"interests":["Other dogs"]},"Indy":{"age":2,"interests":["Butter"]},"Kira":{"age":8,"interests":["Rubs"]}} |
+ ## Next steps - See [Bicep functions - arrays](./bicep-functions-array.md) for additional array related Bicep functions.
azure-resource-manager Template Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-expressions.md
Title: Template syntax and expressions description: Describes the declarative JSON syntax for Azure Resource Manager templates (ARM templates). Previously updated : 03/17/2020 Last updated : 02/09/2023
When passing in parameter values, the use of escape characters depends on where
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "demoParam1":{
- "type": "string",
- "defaultValue": "[[test value]"
- }
- },
- "resources": [],
- "outputs": {
- "exampleOutput": {
- "type": "string",
- "value": "[parameters('demoParam1')]"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "demoParam1": {
+ "type": "string",
+ "defaultValue": "[[test value]"
}
+ },
+ "resources": [],
+ "outputs": {
+ "exampleOutput": {
+ "type": "string",
+ "value": "[parameters('demoParam1')]"
+ }
+ }
} ```
The same formatting applies when passing values in from a parameter file. The ch
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "demoParam1": {
- "value": "[test value]"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "demoParam1": {
+ "value": "[test value]"
+ }
+ }
} ```
To set a property to null, you can use `null` or `[json('null')]`. The [json fun
"objectValue": "[json('null')]" ```
+To remove an element entirely, you can use the [filter() function](./template-functions-lambda.md#filter). For example:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "deployCaboodle": {
+ "type": "bool",
+ "defaultValue": false
+ }
+ },
+ "variables": {
+ "op": [
+ {
+ "name": "ODB"
+ },
+ {
+ "name": "ODBRPT"
+ },
+ {
+ "name": "Caboodle"
+ }
+ ]
+ },
+  "resources": [],
+ "outputs": {
+ "backendAddressPools": {
+ "type": "array",
+ "value": "[if(parameters('deployCaboodle'), variables('op'), filter(variables('op'), lambda('on', not(equals(lambdaVariables('on').name, 'Caboodle')))))]"
+ }
+ }
+}
+```
+ ## Next steps * For the full list of template functions, see [ARM template functions](template-functions.md).
azure-resource-manager Template Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-lambda.md
description: Describes the lambda functions to use in an Azure Resource Manager
Previously updated : 02/06/2023 Last updated : 02/09/2023 # Lambda functions for ARM templates
-This article describes the lambda functions to use in ARM templates. [Lambda expressions (or lambda functions)](/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are restricted to a single line of code.
+This article describes the lambda functions to use in ARM templates. [Lambda functions](/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are restricted to a single line of code. In an ARM template, a lambda function has this format:
+
+```json
+lambda(<lambda variable>, [<lambda variable>, ...], <expression>)
+```
+ > [!TIP] > We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [deployment](../bicep/bicep-functions-deployment.md) functions.
This article describes the lambda functions to use in ARM templates. [Lambda exp
ARM template lambda function has these limitations: -- Lambda expression can only be specified directly as function arguments in these functions: [`filter()`](#filter), [`map()`](#map), [`reduce()`](#reduce), and [`sort()`](#sort).-- Using lambda variables (the temporary variables used in the lambda expressions) inside resource or module array access isn't currently supported.
+- Lambda functions can only be specified directly as function arguments in these functions: [`filter()`](#filter), [`map()`](#map), [`reduce()`](#reduce), [`sort()`](#sort), and [`toObject()`](#toobject).
+- Using lambda variables (the temporary variables used in the lambda functions) inside resource or module array access isn't currently supported.
- Using lambda variables inside the [`listKeys`](./template-functions-resource.md#list) function isn't currently supported. - Using lambda variables inside the [reference](./template-functions-resource.md#reference) function isn't currently supported. ## filter
-`filter(inputArray, lambda expression)`
+`filter(inputArray, lambda function)`
Filters an array with a custom filtering function.
In Bicep, use the [filter](../bicep/bicep-functions-lambda.md#filter) function.
| Parameter | Required | Type | Description | |: |: |: |: | | inputArray |Yes |array |The array to filter.|
-| lambda expression |Yes |expression |The lambda expression applied to each input array element. If false, the item will be filtered out of the output array.|
+| lambda function |Yes |expression |The lambda function applied to each input array element. If false, the item will be filtered out of the output array.|
### Return value
An array.
### Examples
-The following examples show how to use the filter function.
+The following examples show how to use the `filter` function.
```json {
The output from the preceding example:
## map
-`map(inputArray, lambda expression)`
+`map(inputArray, lambda function)`
Applies a custom mapping function to each element of an array.
In Bicep, use the [map](../bicep/bicep-functions-lambda.md#map) function.
| Parameter | Required | Type | Description | |: |: |: |: | | inputArray |Yes |array |The array to map.|
-| lambda expression |Yes |expression |The lambda expression applied to each input array element, in order to generate the output array.|
+| lambda function |Yes |expression |The lambda function applied to each input array element, in order to generate the output array.|
### Return value
An array.
### Example
-The following example shows how to use the map function.
+The following example shows how to use the `map` function.
```json {
The output from the preceding example is:
## reduce
-`reduce(inputArray, initialValue, lambda expression)`
+`reduce(inputArray, initialValue, lambda function)`
Reduces an array with a custom reduce function.
In Bicep, use the [reduce](../bicep/bicep-functions-lambda.md#reduce) function.
|: |: |: |: | | inputArray |Yes |array |The array to reduce.| | initialValue |No |any |Initial value.|
-| lambda expression |Yes |expression |The lambda expression used to aggregate the current value and the next value.|
+| lambda function |Yes |expression |The lambda function used to aggregate the current value and the next value.|
### Return value
Any.
### Example
-The following examples show how to use the reduce function.
+The following examples show how to use the `reduce` function.
```json {
The [union](./template-functions-object.md#union) function returns a single obje
## sort
-`sort(inputArray, lambda expression)`
+`sort(inputArray, lambda function)`
Sorts an array with a custom sort function.
In Bicep, use the [sort](../bicep/bicep-functions-lambda.md#sort) function.
| Parameter | Required | Type | Description | |: |: |: |: | | inputArray |Yes |array |The array to sort.|
-| lambda expression |Yes |expression |The lambda expression used to compare two array elements for ordering. If true, the second element will be ordered after the first in the output array.|
+| lambda function |Yes |expression |The lambda function used to compare two array elements for ordering. If true, the second element will be ordered after the first in the output array.|
### Return value
An array.
### Example
-The following example shows how to use the sort function.
+The following example shows how to use the `sort` function.
```json {
The output from the preceding example sorts the dog objects from the youngest to
| - | - | -- | | dogsByAge | Array | [{"name":"Indy","age":2,"interests":["Butter"]},{"name":"Casper","age":3,"interests":["Other dogs"]},{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] |
+## toObject
+
+`toObject(inputArray, lambda function, [lambda function])`
+
+Converts an array to an object with a custom key function and optional custom value function.
+
+In Bicep, use the [toObject](../bicep/bicep-functions-lambda.md#toobject) function.
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| inputArray |Yes |array |The array used for creating an object.|
+| lambda function |Yes |expression |The lambda function used to provide the key predicate.|
+| lambda function |No |expression |The lambda function used to provide the value predicate.|
+
+### Return value
+
+An object.
+
+### Example
+
+The following example shows how to use the `toObject` function with the two required parameters:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "dogs": [
+ {
+ "name": "Evie",
+ "age": 5,
+ "interests": [
+ "Ball",
+ "Frisbee"
+ ]
+ },
+ {
+ "name": "Casper",
+ "age": 3,
+ "interests": [
+ "Other dogs"
+ ]
+ },
+ {
+ "name": "Indy",
+ "age": 2,
+ "interests": [
+ "Butter"
+ ]
+ },
+ {
+ "name": "Kira",
+ "age": 8,
+ "interests": [
+ "Rubs"
+ ]
+ }
+ ]
+ },
+  "resources": [],
+ "outputs": {
+ "dogsObject": {
+ "type": "object",
+ "value": "[toObject(variables('dogs'), lambda('entry', lambdaVariables('entry').name))]"
+ }
+ }
+}
+```
+
+The preceding example generates an object based on an array.
+
+| Name | Type | Value |
+| - | - | -- |
+| dogsObject | Object | {"Evie":{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},"Casper":{"name":"Casper","age":3,"interests":["Other dogs"]},"Indy":{"name":"Indy","age":2,"interests":["Butter"]},"Kira":{"name":"Kira","age":8,"interests":["Rubs"]}} |
+
+The following `toObject` function, which uses the optional third parameter, produces the same output.
+
+```json
+"outputs": {
+ "dogsObject": {
+ "type": "object",
+ "value": "[toObject(variables('dogs'), lambda('entry', lambdaVariables('entry').name), lambda('entry', lambdaVariables('entry')))]"
+ }
+}
+```
+
+The following example shows how to use the `toObject` function with three parameters.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "dogs": [
+ {
+ "name": "Evie",
+ "properties": {
+ "age": 5,
+ "interests": [
+ "Ball",
+ "Frisbee"
+ ]
+ }
+ },
+ {
+ "name": "Casper",
+ "properties": {
+ "age": 3,
+ "interests": [
+ "Other dogs"
+ ]
+ }
+ },
+ {
+ "name": "Indy",
+ "properties": {
+ "age": 2,
+ "interests": [
+ "Butter"
+ ]
+ }
+ },
+ {
+ "name": "Kira",
+ "properties": {
+ "age": 8,
+ "interests": [
+ "Rubs"
+ ]
+ }
+ }
+ ]
+ },
+  "resources": [],
+ "outputs": {
+ "dogsObject": {
+ "type": "object",
+ "value": "[toObject(variables('dogs'), lambda('entry', lambdaVariables('entry').name), lambda('entry', lambdaVariables('entry').properties))]"
+ }
+ }
+}
+```
+
+The preceding example generates an object based on an array.
+
+| Name | Type | Value |
+| - | - | -- |
+| dogsObject | Object | {"Evie":{"age":5,"interests":["Ball","Frisbee"]},"Casper":{"age":3,"interests":["Other dogs"]},"Indy":{"age":2,"interests":["Butter"]},"Kira":{"age":8,"interests":["Rubs"]}} |
+ ## Next steps - See [Template functions - arrays](./template-functions-array.md) for additional array related template functions.
azure-web-pubsub Howto Develop Reliable Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-reliable-clients.md
When Websocket client connections drop due to intermittent network issues, messa
## Reliable Protocol
-The Web PubSub service supports two reliable subprotocols `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1`. Clients must follow the publisher, subscriber, and reconnection parts of the subprotocol to achieve reliability. Failing to properly implement the subprotocol may result in the message delivery not working as expected or the service terminating the client due to protocol violations.
+The Web PubSub service supports two reliable subprotocols `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1`. Clients must follow the publisher, subscriber, and recovery parts of the subprotocol to achieve reliability. Failing to properly implement the subprotocol may result in the message delivery not working as expected or the service terminating the client due to protocol violations.
-## Initialization
+## The Easy Way - Use Client SDK
+
+The simplest way to create a reliable client is to use the client SDK. The client SDK implements the [Web PubSub client specification](./reference-client-specification.md) and uses `json.reliable.webpubsub.azure.v1` by default. For a quickstart, see [PubSub with client SDK](./quickstart-use-client-sdk.md).
++
+## The Hard Way - Implement by hand
+
+The following tutorial walks you through the important parts of implementing the [Web PubSub client specification](./reference-client-specification.md). This guide isn't for people looking for a quickstart, but for those who want to understand the principles behind achieving reliability. For a quickstart, use the client SDK.
+
+### Initialization
To use reliable subprotocols, you must set the subprotocol when constructing Websocket connections. In JavaScript, you can use the following code:
To use reliable subprotocols, you must set the subprotocol when constructing Web
var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1'); ```
-## Reconnection
+### Connection recovery
-Reconnection is the basis of achieving reliability and must be implemented when using the `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1` protocols.
+Connection recovery is the basis of achieving reliability and must be implemented when using the `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1` protocols.
-Websocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on reconnection
+Websocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client during connection recovery.
When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service.
When the client reconnects to the service using reliable subprotocols, the clien
} ```
-Once the WebSocket connection drops, the client should try to reconnect with the same `connectionId` to restore the same session. Clients don't need to negotiate with the server and obtain the `access_token`. Instead, on reconnection the client should make a WebSocket connect request directly to the service with the service host name, `connection_id`, and `reconnection_token`:
+Once the WebSocket connection drops, the client should try to reconnect with the same `connectionId` to restore the same session. Clients don't need to negotiate with the server and obtain the `access_token`. Instead, to recover the connection, the client should make a WebSocket connect request directly to the service with the service host name, `connection_id`, and `reconnection_token`:
```text wss://<service-endpoint>/client/hubs/<hub>?awps_connection_id=<connection_id>&awps_reconnection_token=<reconnection_token> ```
-Reconnection may fail if the network issue hasn't been recovered yet. The client should keep retrying to reconnect until:
+Connection recovery may fail if the network issue hasn't been resolved yet. The client should keep retrying to reconnect until:
1. The Websocket connection is closed with status code 1008. The status code means the connectionId has been removed from the service.
-2. A reconnection failure continues to occur for more than 1 minute.
+2. A recovery failure continues to occur for more than 1 minute.
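Putting these recovery rules together, a minimal sketch in plain JavaScript (not the client SDK) might look like the following. The function and variable names are placeholders; only the query parameters, the subprotocol name, the 1008 close code, and the one-minute limit come from the guidance above.

```javascript
// Attempt to recover the session using values from the earlier Connected message.
function tryRecover({ serviceEndpoint, hub, connectionId, reconnectionToken }) {
  const url = `wss://${serviceEndpoint}/client/hubs/${hub}` +
    `?awps_connection_id=${connectionId}&awps_reconnection_token=${reconnectionToken}`;
  return new WebSocket(url, 'json.reliable.webpubsub.azure.v1');
}

// Keep retrying until the service closes with 1008 or failures continue for more than 1 minute.
function recoverWithRetry(params, onGiveUp) {
  let deadline = Date.now() + 60 * 1000;
  const attempt = () => {
    const ws = tryRecover(params);
    ws.onopen = () => { deadline = Date.now() + 60 * 1000; }; // reset the failure window on success
    ws.onclose = (e) => {
      if (e.code === 1008 || Date.now() > deadline) {
        onGiveUp(); // give up recovery; start a brand-new connection with a fresh access token
        return;
      }
      setTimeout(attempt, 1000); // simple fixed delay for the sketch; a real client would back off
    };
  };
  attempt();
}
```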
-## Publisher
+### Publisher
Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service that publishing the message was successful or not.
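As a rough illustration, on a connection that uses the `json.reliable.webpubsub.azure.v1` subprotocol, a publisher might attach an `ackId` like this. The message shape is assumed from the JSON subprotocol's `sendToGroup` format; treat the field names as a sketch and confirm them against the subprotocol reference.

```javascript
// "pubsub" is the WebSocket opened earlier with the JSON reliable subprotocol.
let nextAckId = 1;

function publishToGroup(pubsub, group, text) {
  const ackId = nextAckId++;
  pubsub.send(JSON.stringify({
    type: 'sendToGroup',
    group: group,
    ackId: ackId,      // reuse the same ackId if you need to resend after recovery
    dataType: 'text',
    data: text
  }));
  return ackId;        // correlate this with the ack message the service sends back
}
```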
When the service experiences a transient internal error and the message can't be
![Message Failure](./media/howto-develop-reliable-clients/message-failed.png)
-If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after reconnection. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message.
+If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message.
```json {
If the service's ack response is lost because the WebSocket connection dropped,
![Message duplicated](./media/howto-develop-reliable-clients/message-duplicated.png)
-## Subscriber
+### Subscriber
Clients that receive messages from event handlers or publishers are called subscribers. When connections drop due to network issues, the Web PubSub service doesn't know how many messages were sent to subscribers. To determine the last message received by the subscriber, the service sends a data message containing a `sequenceId`. The subscriber responds with a sequence ack message:
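For illustration, on a raw WebSocket using the JSON reliable subprotocol (the `pubsub` object from the earlier snippet), the sequence ack could be sent as follows. The exact message shape is assumed from the JSON reliable subprotocol reference, so verify the field names there.

```javascript
// Acknowledge the sequenceId carried on each received data message so the service
// knows which messages this subscriber has already seen.
pubsub.onmessage = (event) => {
  const message = JSON.parse(event.data);
  if (message.sequenceId) {
    pubsub.send(JSON.stringify({ type: 'sequenceAck', sequenceId: message.sequenceId }));
  }
};
```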
azure-web-pubsub Quickstart Use Client Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-client-sdk.md
+
+ Title: Quickstart - Pub-sub using Azure Web PubSub client SDK
+description: Quickstart showing how to use the Azure Web PubSub client SDK
++++ Last updated : 02/7/2023+
+ms.devlang: azurecli
++
+# Quickstart: Pub-sub using Web PubSub client SDK
+
+This quickstart demonstrates how to set up a project with the Web PubSub client SDK, connect to Web PubSub, subscribe to messages from a group, and publish a message to the group.
+
+> [!NOTE]
+> The client SDK is still in preview. The interface may change in later versions.
+
+## Prerequisites
+
+- A Web PubSub instance. If you haven't created one, you can follow the guidance: [Create a Web PubSub instance from Azure portal](./howto-develop-create-instance.md)
+- A file editor such as Visual Studio Code.
+
+Install the dependencies for the language you're using:
+
+# [JavaScript](#tab/javascript)
+
+Install Node.js
+
+[Node.js](https://nodejs.org)
+
+# [C#](#tab/csharp)
+
+Install both the .NET Core SDK and dotnet runtime.
+
+[.NET Core](https://dotnet.microsoft.com/download)
+++
+## Add the Web PubSub client SDK
+
+# [JavaScript](#tab/javascript)
+
+The SDK is available as an [npm module](https://www.npmjs.com/package/@azure/web-pubsub-client)
+
+```bash
+npm install @azure/web-pubsub-client
+```
+
+# [C#](#tab/csharp)
+
+The SDK is available as a [NuGet package](https://www.nuget.org/packages/Azure.Messaging.WebPubSub.Client)
+
+```bash
+# Create a new .NET console project
+dotnet new console
+
+# Add the client SDK
+dotnet add package Azure.Messaging.WebPubSub.Client --prerelease
+```
+++
+## Connect to Web PubSub
+
+A client uses a Client Access URL to connect and authenticate with the service. The URL follows the pattern `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. There are a few ways for a client to obtain a Client Access URL. For this quickstart, you can copy and paste one from the Azure portal, as shown in the following diagram.
+
+![The diagram shows how to get client access url.](./media/howto-websocket-connect/generate-client-url.png)
+
+As shown in the preceding diagram, the client has permissions to join the group named `group1` and to send messages to it.
++
+# [JavaScript](#tab/javascript)
+
+Create a file named `index.js` and add the following code:
+
+```javascript
+const { WebPubSubClient } = require("@azure/web-pubsub-client");
+// Instantiates the client object. <client-access-url> is copied from Azure portal mentioned above.
+const client = new WebPubSubClient("<client-access-url>");
+```
+
+# [C#](#tab/csharp)
+
+Edit the `Program.cs` file and add the following code:
+
+```csharp
+using Azure.Messaging.WebPubSub.Clients;
+// Instantiates the client object. <client-access-uri> is copied from Azure portal mentioned above.
+var client = new WebPubSubClient(new Uri("<client-access-uri>"));
+```
+++
+## Subscribe to a group
+
+To receive messages from a group, you need to add a callback that handles the messages, and you must join the group before you can receive messages from it. The following code subscribes the client to a group called `group1`.
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+// Callback for messages from the group.
+client.on("group-message", (e) => {
+ console.log(`Received message: ${e.message.data}`);
+});
+
+// The client must be started before it can join a group.
+client.start();
+
+// Join a group to subscribe to messages from that group.
+client.joinGroup("group1");
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+// Callback for messages from the group.
+client.GroupMessageReceived += eventArgs =>
+{
+ Console.WriteLine($"Receive group message from {eventArgs.Message.Group}: {eventArgs.Message.Data}");
+ return Task.CompletedTask;
+};
+
+// The client must be started before it can join a group.
+await client.StartAsync();
+
+// Join a group to subscribe to messages from that group.
+await client.JoinGroupAsync("group1");
+```
++
+## Publish a message to a group
+
+You can then send messages to the group. Because the client has already joined the group, it also receives the message it sends.
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+client.sendToGroup("group1", "Hello World", "text");
+```
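If you prefer to sequence these calls explicitly, the pieces above can be combined into a single script. This sketch assumes the SDK's `start`, `joinGroup`, and `sendToGroup` methods return Promises, which lets you await each step in order.

```javascript
const { WebPubSubClient } = require("@azure/web-pubsub-client");

async function main() {
  const client = new WebPubSubClient("<client-access-url>");

  // Log every message received from the group.
  client.on("group-message", (e) => console.log(`Received: ${e.message.data}`));

  await client.start();                                      // connect first
  await client.joinGroup("group1");                          // then join the group
  await client.sendToGroup("group1", "Hello World", "text"); // the callback above receives this
}

main().catch(console.error);
```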
+
+# [C#](#tab/csharp)
+
+```csharp
+await client.SendToGroupAsync("group1", BinaryData.FromString("Hello World"), WebPubSubDataType.Text);
+```
+++
+## Repository and Samples
+
+# [JavaScript](#tab/javascript)
+
+[JavaScript SDK repository on GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub-client)
+
+[TypeScript sample](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub-client/samples/v1-beta/typescript)
+
+[Browser sample](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/web-pubsub/web-pubsub-client/samples-browser)
+
+[Chat app sample](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp/sdk)
+
+# [C#](#tab/csharp)
+
+[.NET SDK repository on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Azure.Messaging.WebPubSub.Client)
+
+[Log streaming sample](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/logstream/sdk)
+++
+## Next steps
+
+This quickstart gives you a basic idea of how to connect to Web PubSub with the client SDK, subscribe to group messages, and publish messages to groups.
+
azure-web-pubsub Reference Json Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-reliable-webpubsub-subprotocol.md
The response to the client connect request:
wss://<service-endpoint>/client/hubs/<hub>?awps_connection_id=<connectionId>&awps_reconnection_token=<reconnectionToken> ```
-Find more details in [Reconnection](./howto-develop-reliable-clients.md#reconnection)
+Find more details in [Connection Recovery](./howto-develop-reliable-clients.md#connection-recovery)
#### Disconnected
azure-web-pubsub Reference Protobuf Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-protobuf-reliable-webpubsub-subprotocol.md
For example, in JavaScript, you can create a Reliable PubSub WebSocket client wi
var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1'); ```
-When using `json.reliable.webpubsub.azure.v1` subprotocol, the client must follow the [How to create reliable clients](./howto-develop-reliable-clients.md) to implement reconnection, publisher and subscriber.
+To correctly use the `protobuf.reliable.webpubsub.azure.v1` subprotocol, the client must follow [How to create reliable clients](./howto-develop-reliable-clients.md) to implement connection recovery, publisher, and subscriber behavior.
> [!NOTE] > Currently, the Web PubSub service supports only [proto3](https://developers.google.com/protocol-buffers/docs/proto3).
message MessageData {
Reliable PubSub WebSocket client must send `SequenceAckMessage` once it received a message from the service. Find more in [How to create reliable clients](./howto-develop-reliable-clients.md#subscriber)
-* `sequence_id` is a incremental uint64 number from the message received.
+* `sequence_id` is an incremental uint64 number from the message received.
## Responses
When the client connects to the service, you receive a `DownstreamMessage.System
wss://<service-endpoint>/client/hubs/<hub>?awps_connection_id=<connectionId>&awps_reconnection_token=<reconnectionToken> ```
-Find more details in [Reconnection](./howto-develop-reliable-clients.md#reconnection)
+Find more details in [Connection Recovery](./howto-develop-reliable-clients.md#connection-recovery)
#### Disconnected
cognitive-services How To Configure Openssl Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-configure-openssl-linux.md
When the Speech SDK connects to the Speech Service, it checks the Transport Laye
If a destination posing as the Speech Service reports a certificate that's been revoked in a retrieved CRL, the SDK will terminate the connection and report an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK will also treat a failure to download a CRL from an Azure CA location as an error.
+> [!WARNING]
+> If your solution uses a proxy or firewall, it must be configured to allow access to all certificate revocation list URLs used by Azure. Many of these URLs are outside the `microsoft.com` domain, so allowing access to `*.microsoft.com` isn't enough. See [this document](../../security/fundamentals/tls-certificate-changes.md) for details. In exceptional cases, you may ignore CRL failures (see [the corresponding section](#bypassing-or-ignoring-crl-failures)), but such a configuration is strongly discouraged, especially for production scenarios.
+ ### Large CRL files (>10 MB) One cause of CRL-related failures is the use of large CRL files. This class of error is typically only applicable to special environments with extended CA chains. Standard public endpoints shouldn't encounter this class of issue.
cognitive-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md
Previously updated : 12/17/2022 Last updated : 02/09/2023 # Managed identities for Document Translation + > [!IMPORTANT] > > * Currently, Document Translation doesn't support managed identity in the global region. If you intend to use managed identities for Document Translation operations, [create your Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in a non-global Azure region.
Managed identities for Azure resources are service principals that create an Azu
* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Managed identities eliminate the need for you to include shared access signature tokens (SAS) with your HTTP requests.
-* To grant access to an Azure resource, you'll assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md).
+* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md).
* There's no added cost to use managed identities in Azure.
Managed identities for Azure resources are service principals that create an Azu
> > * Managed identities are a safer way to grant access to data without having SAS tokens included with your HTTP requests. + ## Prerequisites
-To get started, you'll need:
+
+To get started, you need:
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/)ΓÇöif you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
To get started, you'll need:
* A brief understanding of [**Azure role-based access control (`Azure RBAC`)**](../../../../role-based-access-control/role-assignments-portal.md) using the Azure portal.
-* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You'll create containers to store and organize your blob data within your storage account.
+* An [**Azure blob storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Translator resource. You also need to create containers to store and organize your blob data within your storage account.
* **If your storage account is behind a firewall, you must enable the following configuration**: </br>
There are two types of managed identities: **system-assigned** and **user-assign
* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
-* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity will be deleted as well.
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
-In the following steps, we'll enable a system-assigned managed identity and grant your Translator resource limited access to your Azure blob storage account.
+In the following steps, we enable a system-assigned managed identity and grant your Translator resource limited access to your Azure blob storage account.
## Enable a system-assigned managed identity
The **Storage Blob Data Contributor** role gives Translator (represented by the
:::image type="content" source="../../media/managed-identities/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot: enable system-assigned managed identity in Azure portal.":::
-1. An Azure role assignments page will open. Choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
+1. On the Azure role assignments page that opened, choose your subscription from the drop-down menu then select **&plus; Add role assignment**.
:::image type="content" source="../../media/managed-identities/azure-role-assignments-page-portal.png" alt-text="Screenshot: Azure role assignments page in the Azure portal.":::
The **Storage Blob Data Contributor** role gives Translator (represented by the
* A batch Document Translation request is submitted to your Translator service endpoint via a POST request.
-* With managed identity and `Azure RBAC`, you'll no longer need to include SAS URLs.
+* With managed identity and `Azure RBAC`, you no longer need to include SAS URLs.
* If successful, the POST method returns a `202 Accepted` response code and the batch request is created by the service.
-* The translated documents will appear in your target container.
+* The translated documents appear in your target container.
### Headers
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
When using our Embeddings models, keep in mind their limitations and risks.
| Text-davinci-003 | Yes | No | East US | N/A | | Text-davinci-fine-tune-002* | Yes | No | N/A | East US, West Europe |
-\*Models available by request only. Please open a support request.
+\*Models available by request only. We're currently unable to onboard new customers.
### Codex Models | Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
When using our Embeddings models, keep in mind their limitations and risks.
| Code-Davinci-002 | Yes | No | East US, West Europe | N/A | | Code-Davinci-Fine-tune-002* | Yes | No | N/A | East US, West Europe |
-\*Models available for Fine-tuning by request only. Please open a support request.
+\*Models available for fine-tuning by request only. We're currently unable to onboard new customers.
### Embeddings Models | Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | | | | | | |
+| text-embedding-ada-002 | No | Yes | East US, South Central US, West Europe | N/A |
| text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A | | text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A | | text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A |
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The Azure OpenAI service provides REST API access to OpenAI's powerful language
| Feature | Azure OpenAI | | | | | Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* available by request. Please open a support request|
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* Currently unavailable.|
| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) | | Virtual network support | Yes | | Managed Identity| Yes, via Azure Active Directory |
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
keywords:
* **Service GA**. Azure OpenAI is now generally available.ΓÇï
-* **New models**: Addition of the latest text model, text-davinci-003
+* **New models**: Added the latest text model, text-davinci-003 (East US, West Europe), and the text-embedding-ada-002 embeddings model (East US, South Central US, West Europe).
## December 2022
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
The following list presents the set of features that are currently available in
| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ | | | Answer a group call | ✔️ | ✔️ | | | Place new outbound call to one or more endpoints | ✔️ | ✔️ |
-| | Redirect (forward) a call to one or more endpoints | ✔️ | ✔️ |
+| | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ |
| | Reject an incoming call | ✔️ | ✔️ | | Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | | | Recognize user input through DTMF | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
-| | Blind Transfer* a call to another endpoint | ✔️ | ✔️ |
+| | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ |
| | Hang up a call (remove the call leg) | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | | Query scenarios | Get the call state | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | List all participants in a call | ✔️ | ✔️ | | Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ |
-*Transfer of VoIP call to a phone number is currently not supported.
+*Transfer or redirect of a VoIP call to a phone number is currently not supported.
## Architecture
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
All usage of Azure Communication Service APIs and SDKs increments [Azure Communi
If your Azure application has a user spend 10 minutes in a meeting with a user of Microsoft Teams, those two users combined consumed 20 calling minutes. The 10 minutes exercised through the custom application and using Azure APIs and SDKs will be billed to your resource. However, the 10 minutes consumed by the user in the native Teams application is covered by the applicable Teams license and is not metered by Azure.
+## Trademark and brand guidelines
+Third parties must follow the [Microsoft Trademark and Brand Guidelines](https://www.microsoft.com/legal/intellectualproperty/trademarks) when using Microsoft Teams trademarks or product logos in advertising or promotional materials. In general, wordmarks can be used to truthfully convey information about your product or service, as long as customers and the public will not be confused into believing Microsoft is affiliated with or endorses your product or service. However, our logos, app, product icons, illustrations, photographs, videos, and designs can never be used without an express license. To get more details about branding, read [Microsoft Trademark and Brand Guidelines](https://www.microsoft.com/legal/intellectualproperty/trademarks).
+ ## Teams in Government Clouds (GCC) Azure Communication Services interoperability isn't compatible with Teams deployments using [Microsoft 365 government clouds (GCC)](/MicrosoftTeams/plan-for-government-gcc) at this time.
communication-services Ui Library Cross Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/ui-library-cross-platform.md
Title: Cross Platform development using the UI library
-description: Cross Platform development solutions using the UI library to enable Xamarin and React Native developers build communication applications
+description: Cross Platform development solutions using the UI library to enable .NET MAUI, Xamarin, and React Native developers to build communication applications
Last updated 08/30/2021
-zone_pivot_groups: acs-xamarin-react
+zone_pivot_groups: acs-maui-xamarin-react
# Get started with Cross Platform development using the UI library [!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-Azure Communication Services introduces Cross Platform development using **Xamarin and React Native** solutions. This sample demonstrates how Azure Communication Services Calling integrates the UI Library for mobile platforms and create the bindings to allow developers to begin building with the calling capabilities.
+Azure Communication Services introduces Cross Platform development using **.NET MAUI, Xamarin and React Native** solutions. This sample demonstrates how Azure Communication Services Calling integrates the UI Library for mobile platforms and creates the bindings that allow developers to begin building with the calling capabilities.
+ ::: zone pivot="platform-xamarin" [!INCLUDE [Xamarin](./includes/ui-xamarin.md)]
communications-gateway Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/security.md
Previously updated : 01/27/2023 Last updated : 02/09/2023
The customer data Azure Communications Gateway handles can be split into:
- Content data, such as media for voice calls. - Customer data present in call metadata.
-## Encryption between Microsoft Teams and Azure Communications Gateway
-
-All traffic between Azure Communications Gateway and Microsoft Teams is encrypted. SIP traffic is encrypted using TLS. Media traffic is encrypted using SRTP.
- ## Data retention, data security and encryption at rest
-Azure Communications Gateway doesn't store content data, but it does store customer data and provide statistics based on it. This data is stored for a maximum of 30 days. After this period, it's no longer accessible to perform diagnostics or analysis of individual calls. Anonymized statistics and logs produced based on customer data will continue to be available beyond the 30 days limit.
+Azure Communications Gateway doesn't store content data, but it does store customer data and provide statistics based on it. This data is stored for a maximum of 30 days. After this period, it's no longer accessible to perform diagnostics or analysis of individual calls. Anonymized statistics and logs produced based on customer data remain available beyond the 30-day limit.
Azure Communications Gateway doesn't support [Customer Lockbox for Microsoft Azure](/azure/security/fundamentals/customer-lockbox-overview). However Microsoft engineers can only access data on a just-in-time basis, and only for diagnostic purposes. Azure Communications Gateway stores all data at rest securely, including any customer data that has to be temporarily stored, such as call records. It uses standard Azure infrastructure, with platform-managed encryption keys, to provide server-side encryption compliant with a range of security standards including FedRAMP. For more information, see [encryption of data at rest](../security/fundamentals/encryption-overview.md).
+## Encryption in transit
+
+All traffic handled by Azure Communications Gateway is encrypted. This encryption is used between Azure Communications Gateway components and towards Microsoft Teams.
+* SIP and HTTP traffic is encrypted using TLS.
+* Media traffic is encrypted using SRTP.
+
+When encrypting traffic to send to your network, Azure Communications Gateway prefers TLSv1.3. It falls back to TLSv1.2 if necessary.
+
+The following cipher suites are used for encrypting SIP and RTP.
+
+### Ciphers used with TLSv1.2
+
+* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+
+### Ciphers used with TLSv1.3
+
+* TLS_AES_256_GCM_SHA384
+* TLS_AES_128_GCM_SHA256
+
+### Ciphers used with SRTP
+
+* AES_CM_128_HMAC_SHA1_80
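If you want to sanity-check which TLS version and cipher your own network equipment negotiates, a quick probe from Node.js might look like the following. The host name and port are placeholders, and this is only a rough illustration; it doesn't reproduce Azure Communications Gateway's exact TLS client configuration.

```javascript
// Probe a TLS endpoint and print the negotiated protocol and cipher suite.
const tls = require('tls');

const socket = tls.connect(
  { host: 'sbc.contoso.example', port: 5061, minVersion: 'TLSv1.2', rejectUnauthorized: false },
  () => {
    console.log('Protocol:', socket.getProtocol()); // e.g. TLSv1.3
    console.log('Cipher:', socket.getCipher());     // e.g. { name: 'TLS_AES_256_GCM_SHA384', ... }
    socket.end();
  }
);
socket.on('error', (err) => console.error('TLS handshake failed:', err.message));
```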
+ ## Next steps - Learn about [how Azure Communications Gateway communicates with Microsoft Teams and your network](interoperability.md).
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
Title: 'Quickstart: Deploy an AKS cluster with Enclave Confidential Container Intel SGX nodes by using the Azure CLI' description: Learn how to create an Azure Kubernetes Service (AKS) cluster with enclave confidential containers a Hello World app by using the Azure CLI. --+ Last updated 11/1/2021
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-overview.md
Title: Confidential computing application enclave nodes on Azure Kubernetes Serv
description: Intel SGX based confidential computing VM nodes with application enclave support --+ Last updated 07/15/2022
confidential-computing Enclave Aware Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/enclave-aware-containers.md
Title: Enclave aware containers on Azure description: enclave ready application containers support on Azure Kubernetes Service (AKS) --+ Last updated 9/22/2020
cosmos-db Monitor Request Unit Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-request-unit-usage.md
To get the request unit usage of each operation either by total(sum) or average,
:::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-operations.png" alt-text="Azure Cosmos DB Request units for operations in Azure monitor":::
-If you want to see the request unit usage by collection, select **Apply splitting** and choose the collection name as a filter. You will see a chat like the following with a choice of collections within the dashboard. You can then select a specific collection name to view more details:
+If you want to see the request unit usage by collection, select **Apply splitting** and choose the collection name as a filter. You will see a chart like the following with a choice of collections within the dashboard. You can then select a specific collection name to view more details:
:::image type="content" source="./media/monitor-request-unit-usage/request-unit-usage-collection.png" alt-text="Azure Cosmos DB Request units for all operations by the collection in Azure monitor" border="true":::
databox-gateway Data Box Gateway 2301 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-gateway/data-box-gateway-2301-release-notes.md
+
+ Title: Azure Data Box Gateway 2301 release notes| Microsoft Docs
+description: Describes critical open issues and resolutions for the Azure Data Box Gateway running 2301 release.
++
+
+++ Last updated : 02/10/2023+++
+# Azure Data Box Gateway 2301 release notes
+
+The following release notes identify the critical open issues and the resolved issues for the 2301 release of Azure Data Box Gateway.
+
+The release notes are continuously updated. As critical issues that require a workaround are discovered, they are added. Before you deploy your Azure Data Box Gateway, carefully review the information in the release notes.
+
+This release corresponds to the software version:
+
+- **Data Box Gateway 2301 (1.6.2225.773)** - KB 5023529
+
+> [!NOTE]
+> Update 2301 can be applied only to devices that are running 2105 versions of the software or later. If you are running a version earlier than 2105, update your device to 2105 and then update to 2301.
+
+## What's new
+
+This release contains the following bug fixes:
+
+- **Update Agent SDK** - An update to SaaS agent SDK to fix the expired certificate issue.
+- **MSRC fixes** - Security fixes.
+
+This release also contains the following updates:
+
+- **Monitoring agent update**.
+- **SAAS agent SDK update** - Provides certificate rotation.
+- **Updated Nuget Package References** - Enhances security.
+- **Other updates** - All cumulative updates and .NET framework updates through November 2022.
+
+## Known issues in this release
+
+No new issues are documented for this release. All previously noted issues have carried over from earlier releases. For a list of known issues, see [Known issues in the GA release](data-box-gateway-release-notes.md#known-issues-in-ga-release).
+
+## Next steps
+
+- [Prepare to deploy Azure Data Box Gateway](data-box-gateway-deploy-prep.md)
databox-online Azure Stack Edge 2301 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-2301-release-notes.md
+
+ Title: Azure Stack Edge Pro FPGA 2301 release notes | Microsoft Docs
+description: Describes Azure Stack Edge Pro FPGA 2301 release critical open issues and resolutions.
+++++ Last updated : 02/10/2023+++
+# Azure Stack Edge Pro with FPGA 2301 release notes
+
+The following release notes identify critical open issues and the resolved issues for the 2301 release of Azure Stack Edge Pro FPGA with a built-in Field Programmable Gate Array (FPGA).
+
+The release notes are continuously updated. As critical issues that require a workaround are discovered, they are added. Before you deploy your Azure Stack Edge device, carefully review the information in the release notes.
+
+This release corresponds to software version:
+
+- **Azure Stack Edge 2301 (1.6.2225.773)** - KB 5023528
+
+> [!NOTE]
+> Update 2301 can be applied only to devices that are running 2101 versions of the software or later. If you are running a version earlier than 2101, update your device to 2101 and then update to 2301.
+
+## What's new
+
+This release contains the following bug fixes:
+
+- **Update Agent SDK** - An update to SaaS agent SDK to fix the expired certificate issue.
+- **MSRC fixes** - Security fixes.
+
+This release also contains the following updates:
+
+- **Monitoring agent update**.
+- **SAAS agent SDK update** - Provides certificate rotation.
+- **Updated Nuget Package References** - Enhances security.
+- **Other updates** - All cumulative updates and .NET framework updates through November 2022.
+
+## Known issues in this release
+
+No new issues are documented for this release. All previously noted issues have carried over from earlier releases. To see a list of known issues, go to [Known issues in the GA release](../databox-gateway/data-box-gateway-release-notes.md#known-issues-in-ga-release).
+
+## Next steps
+
+- [Prepare to deploy Azure Stack Edge](../databox-online/azure-stack-edge-deploy-prep.md)
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Compliance | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds |
-| Vulnerability Assessment | View vulnerabilities for running images | AKS | GA | GA | Defender profile | Defender for Containers | Commercial clouds |
+| Vulnerability Assessment | View vulnerabilities for running images | AKS | GA | Preview | Defender profile | Defender for Containers | Commercial clouds |
| Hardening | Control plane recommendations | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
dms Tutorial Login Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-login-migration-ads.md
+
+ Title: "Tutorial: Migrate SQL Server logins (preview) to Azure SQL in Azure Data Studio"
+
+description: Learn how to migrate on-premises SQL Server logins (preview) to Azure SQL by using Azure Data Studio and Azure Database Migration Service.
+++ Last updated : 01/31/2023+++++
+# Tutorial: Migrate SQL Server logins (preview) to Azure SQL in Azure Data Studio
+
+You can use Azure Database Migration Service and the Azure SQL Migration extension to assess your databases, get right-sized Azure recommendations, and migrate databases from an on-premises SQL Server instance to Azure SQL. As part of the post-migration tasks, we're introducing a new user experience with an independent workflow you can use to migrate logins (preview) and server roles from your on-premises source SQL Server to the Azure SQL target.
+
+> [!NOTE]
+> The option to migrate SQL Server logins to Azure SQL targets by using Azure Data Studio is currently in preview. This new migration experience is only available by using the [Azure Data Studio Insiders](/sql/azure-data-studio/download-azure-data-studio#download-the-insiders-build-of-azure-data-studio) version of the Azure SQL Migration extension.
+
+This login migration experience automates manual tasks such as the synchronization of logins with their corresponding user mappings and replicating server/securable permissions and server roles.
+
+> [!IMPORTANT]
+> Currently, only Azure SQL Managed Instance and SQL Server on Azure Virtual Machines targets are supported.
+>
+> We recommend completing the migration of your on-premises databases to Azure SQL before you start the login migration. This ensures that the database-level users have already been migrated to the target, so the login migration process can synchronize the user-login mappings.
+
+In this tutorial, learn how to migrate a set of different SQL Server logins from an on-premises SQL Server to Azure SQL Managed Instance, by using the Azure SQL Migration extension for Azure Data Studio.
+
+> [!NOTE]
+> You can use the Azure SQL Migration extension for Azure Data Studio, PowerShell, or the Azure CLI to start the login migration process.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+>
+> - Open the Migrate to Azure SQL wizard in Azure Data Studio
+> - Start the SQL Server login migration wizard
+> - Select your logins from the source SQL Server instance
+> - Select and connect to your Azure SQL target
+> - Start your SQL Server login migration and monitor progress to completion
+
+ > [!NOTE]
+ > Windows account migrations are supported only for Azure SQL Managed Instance targets.
+
+## Prerequisites
+
+Before you begin the tutorial:
+
+- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+
+- Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart) or [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
+
+- The machine on which the client (such as Azure Data Studio, PowerShell, or the Azure CLI) runs the login migration should have connectivity to both the source and target SQL Server instances.
+
+- Ensure that the logins that you use to connect to the source and target SQL Server instances are members of the **sysadmin** server role.
+
+- As an optional step, you can migrate your on-premises databases to your selected Azure SQL target by using one of the following tutorials:
+
+ | Migration scenario | Migration mode |
+ | | |
+ | SQL Server to Azure SQL Managed Instance | [Online](tutorial-sql-server-managed-instance-online-ads.md) / [Offline](tutorial-sql-server-managed-instance-offline-ads.md) |
+ | SQL Server to SQL Server on an Azure virtual machine | [Online](tutorial-sql-server-to-virtual-machine-online-ads.md) / [Offline](./tutorial-sql-server-to-virtual-machine-offline-ads.md) |
+
+ > [!IMPORTANT]
+ > If you haven't completed the database migration and the login migration process is started, the migration of logins and server roles will still happen, but login/role mappings won't be performed correctly.
+ >
+ > Nevertheless, the login migration process can be performed at any time, to update the user mapping synchronization for recently migrated databases.
+
+- For Windows accounts, ensure that the target SQL managed instance has Azure Active Directory read access. This option can be configured via the Azure portal by a user with the Global Administrator role. For more information, see [Provision Azure AD admin (SQL Managed Instance)](/sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance).
+
+ Domain federation between local Active Directory Domain Services (AD DS) and Azure Active Directory (Azure AD) has to be set up by an administrator. This configuration is required so that the on-premises Windows users can be synced with the company Azure AD. The login migrations process would then be able to create an external login for the corresponding Azure AD user in the target managed instance.
+
+ In case the domain federation hasn't been set up yet in your Azure Active Directory tenant, the administrator can refer to the following links to get started:
+ - [Tutorial: Basic Active Directory environment](/azure/active-directory/cloud-sync/tutorial-basic-ad-azure)
+ - [Tutorial: Integrate a single forest with a single Azure AD tenant](/azure/active-directory/cloud-sync/tutorial-single-forest)
+ - [Provision Azure AD admin (SQL Managed Instance)](/azure/azure-sql/database/authentication-aad-configure#provision-azure-ad-admin-sql-managed-instance)
+
+- Windows account migrations are supported **only for Azure SQL Managed Instance targets**. The Login Migration wizard will show you a prompt, where you have to enter the Azure AD domain name to convert the Windows users to their Azure AD versions.
+
+ For example, if the Windows user is `contoso\username`, and the Azure AD domain name is `contoso.com`, then the converted Azure AD username will be `username@contoso.com`. For this conversion to happen correctly, the domain federation between the local Active Directory and Azure AD should be set up.
+
+ > [!IMPORTANT]
+ > For a large number of logins, we recommend using automation. With PowerShell or the Azure CLI, you can use the `CSVFilePath` switch, which allows you to pass a CSV file containing the list of logins to be migrated.
+ >
+ > Bulk login migrations might be time-consuming in Azure Data Studio, because you need to manually select each login to migrate on the login selection screen.
+
+## Open the Login Migration wizard in Azure Data Studio
+
+To open the Login Migration wizard:
+
+1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
+
+1. Right-click the server connection and select **Manage**.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/azure-data-studio-manage-panel.png" alt-text="Screenshot that shows a server connection and the Manage option in Azure Data Studio." lightbox="media/tutorial-login-migration-ads/azure-data-studio-manage-panel.png":::
+
+1. In the server menu under **General**, select **Azure SQL Migration**.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/launch-migrate-to-azure-sql-wizard-1.png" alt-text="Screenshot that shows the Azure Data Studio server menu.":::
+
+1. In the Azure SQL Migration dashboard, select the **New login migration** button to open the login migration wizard.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/launch-login-migration-wizard.png" alt-text="Screenshot that shows the Login migration wizard.":::
+
+## Configure login migration settings
+
+1. In **Step 1: Azure SQL target** on the New login migration wizard, complete the following steps:
+
+ 1. Select your Azure SQL target type and Azure account. Then in the next section, select your Azure subscription, the Azure region or location, and the resource group that contains the target Azure SQL target.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/configuration-azure-target-account.png" alt-text="Screenshot that shows Azure account details.":::
+
+ 1. Enter your SQL login username and password to connect to the target managed instance. Select **Connect** to verify that the connection to the target is successful. Then, select **Next**.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/configuration-azure-target-database.png" alt-text="Screenshot that shows Azure SQL Managed Instance connectivity.":::
+
+1. In **Step 2: Select login(s) to migrate**, select the logins that you wish to migrate from the source SQL server to the Azure SQL target. For Windows accounts, you'll be prompted to enter the associated Azure Active Directory domain name. Then select **Migrate** to start the login migration process.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/logins-to-migrate.png" alt-text="Screenshot that shows the source logins details.":::
+
+## Start the login migration process
+
+1. In **Step 3: Migration Status**, the login migrations will proceed, along with other steps in the process such as validations, mappings and permissions.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/migration-status-1.png" alt-text="Screenshot that shows the initial login migration status.":::
+
+ :::image type="content" source="media/tutorial-login-migration-ads/migration-status-2.png" alt-text="Screenshot that shows the continuation of the login migration status.":::
+
+1. Once the login migration is successfully completed (or if it has failures), the page displays the relevant updates.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/migration-status-3.png" alt-text="Screenshot that shows the completed login migration status.":::
+
+## Monitor your migration
+
+1. You can monitor the process for each login by selecting the link under the login's Migration Status.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/migration-details-1.png" alt-text="Screenshot that shows the details of the migrated logins.":::
+
+1. In the dialog that opens, you can monitor the individual steps of the process. Selecting any step populates the **Step details** pane with the relevant information.
+
+ :::image type="content" source="media/tutorial-login-migration-ads/migration-details-2.png" alt-text="Screenshot that shows details of the ongoing login migration.":::
+
+The migration details page displays the different stages involved in the login migration process:
+
+| Status | Description |
+| | |
+| Migration of logins | Migrating logins that have been selected by the user to the target |
+| Migration of server roles | All server roles will be migrated from source to target |
+| User-login mappings | Synchronization between users of the databases and migrated logins |
+| Login-server role mappings | Server role membership of logins and membership between roles will be set in the target |
+| Establish server and object (securable) permissions for logins | Server-level and object-level (securable) permissions for logins are set in the target |
+| Establish server and object (securable) permissions for server roles | Server-level and object-level (securable) permissions for server roles are set in the target |
+
+## Post-migration steps
+
+- Your target Azure SQL instance should now have the logins that you selected to migrate, in addition to all the server roles from the source SQL Server, with the associated user mappings, role memberships, and permissions copied over.
+
+ You can verify this by signing in to the target Azure SQL instance with one of the migrated logins, using the same password that the login had on the source SQL Server instance. You can also list the migrated logins and server roles with a query, as shown in the sketch after this list.
+
+- If you have also migrated Windows accounts, make sure to select the **Azure Active Directory - Password** option when signing in to the target managed instance, using the same password that the Windows account had on the source SQL Server.
+
+ The username should be in the format of `username@contoso.com` (the Azure Active Directory domain name provided in Step 2 of the login migration wizard).
+
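+To double-check the result from the server side, the following T-SQL sketch lists the server principals on the target. It's illustrative only; the principal types you care about and the names returned depend on your environment.
+
+```sql
+-- Run against the target Azure SQL Managed Instance or SQL Server on Azure VM.
+-- S = SQL login, U/G = Windows login/group, E/X = Azure AD login/group, R = server role
+SELECT name, type_desc, create_date
+FROM sys.server_principals
+WHERE type IN ('S', 'U', 'G', 'E', 'X', 'R')
+ORDER BY create_date DESC;
+```
+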
+## Limitations
+
+The following table describes the current status of login migration support by Azure SQL target and login type:
+
+| Target | Login type | Support | Status |
+| - | - |:-:|:-:|
+| Azure SQL Database | SQL login | No | |
+| Azure SQL Database | Windows account | No | |
+| Azure SQL Managed Instance | SQL login | Yes | Preview |
+| Azure SQL Managed Instance | Windows account | Yes | Preview |
+| SQL Server on Azure VM | SQL login | Yes | Preview |
+| SQL Server on Azure VM | Windows account | No | |
+
+### SQL Server on Azure Virtual Machines
+
+- Windows account migrations aren't supported for this Azure SQL target.
+
+- Only the SQL Server default port (1433) with no option to override is supported in Azure Data Studio. An alternative is to use PowerShell or Azure CLI to complete this type of migration.
+
+- Only the primary IP address with no option to override is supported in Azure Data Studio. An alternative is to use PowerShell or Azure CLI to complete this type of migration.
+
+## Next steps
+
+- [Migrate databases with Azure SQL Migration extension for Azure Data Studio](/azure/dms/migration-using-azure-data-studio)
+- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](/azure/dms/tutorial-sql-server-azure-sql-database-offline-ads)
+- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](/azure/dms/tutorial-sql-server-managed-instance-online-ads)
+- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](/azure/dms/tutorial-sql-server-to-virtual-machine-online-ads)
dms Tutorial Transparent Data Encryption Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md
+
+ Title: "Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio"
+
+description: Learn how to migrate on-premises SQL Server TDE-enabled databases (preview) to Azure SQL by using Azure Data Studio and Azure Database Migration Service.
+++ Last updated : 02/03/2023++++
+# Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio
+
+For securing a SQL Server database, you can take precautions like designing a secure system, encrypting confidential assets, and building a firewall. However, physical theft of media like drives or tapes can still compromise the data.
+
+Transparent data encryption (TDE) provides a solution to this problem, with real-time I/O encryption and decryption of data at rest (data and log files) by using a symmetric database encryption key (DEK) secured by a certificate. For more information about migrating TDE certificates manually, see [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
+
+When you migrate a TDE-protected database, the certificate (asymmetric key) used to open the database encryption key (DEK) must also be moved along with the source database. Therefore, you need to recreate the server certificate in the `master` database of the target SQL Server for that instance to access the database files.
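+
+For reference, recreating the certificate manually on a SQL Server target looks roughly like the following T-SQL sketch (the certificate name, file paths, and password are placeholders). Azure SQL Managed Instance doesn't expose a file system, which is one reason the wizard described in this article automates the certificate migration for you.
+
+```sql
+USE master;
+GO
+
+-- Recreate the source server's certificate on the target instance from the exported files
+CREATE CERTIFICATE TDE_Cert
+    FROM FILE = 'C:\certs\TDE_Cert.cer'
+    WITH PRIVATE KEY (
+        FILE = 'C:\certs\TDE_Cert.pvk',
+        DECRYPTION BY PASSWORD = '<StrongPassword>'
+    );
+GO
+```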
+
+You can use the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to help you migrate TDE-enabled databases (preview) from an on-premises instance of SQL Server to Azure SQL.
+
+ > [!NOTE]
+ > The option to migrate TDE-enabled databases from on-premises SQL Server to Azure SQL targets by using Azure Data Studio is currently in preview. This new migration experience is only available by using the [Azure Data Studio Insiders](/sql/azure-data-studio/download-azure-data-studio#download-the-insiders-build-of-azure-data-studio) version of the Azure SQL Migration extension.
+
+The TDE-enabled database migration process automates manual tasks such as backing up the database certificate keys (DEK), copying the certificate files from the on-premises SQL Server to the Azure SQL target, and then reconfiguring TDE for the target database again.
+
+ > [!IMPORTANT]
+ > Currently, only Azure SQL Managed Instance targets are supported.
+
+In this tutorial, you learn how to migrate the example `AdventureWorksTDE` encrypted database from an on-premises instance of SQL Server to an Azure SQL managed instance.
+
+> [!div class="checklist"]
+>
+> - Open the Migrate to Azure SQL wizard in Azure Data Studio
+> - Run an assessment of your source SQL Server databases
+> - Configure your TDE certificates migration
+> - Connect to your Azure SQL target
+> - Start your TDE certificate migration and monitor progress to completion
+
+## Prerequisites
+
+Before you begin the tutorial:
+
+- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+
+- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+
+- Run Azure Data Studio as Administrator.
+
+- Have an Azure account that is assigned to one of the following built-in roles:
+ - Contributor role for the target managed instance (and for the storage account to which you upload your backups of the TDE certificate files from the SMB network share).
+ - Reader role for the Azure Resource Groups containing the target managed instance or the Azure storage account.
+ - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
+ - As an alternative to using the above built-in roles, you can assign a custom role. For more information, see [Custom roles: Online SQL Server to SQL Managed Instance migrations using ADS](resource-custom-roles-sql-db-managed-instance-ads.md).
+
+- Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
+
+- Ensure that the login that you use to connect to the SQL Server source is a member of the **sysadmin** server role.
+
+- The machine on which Azure Data Studio runs the TDE-enabled database migration should have connectivity to both the source and target SQL Server instances.
+
+## Open the Migrate to Azure SQL wizard in Azure Data Studio
+
+To open the Migrate to Azure SQL wizard:
+
+1. In Azure Data Studio, go to **Connections**. Connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
+
+1. Right-click the server connection and select **Manage**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/azure-data-studio-manage-panel.png" alt-text="Screenshot that shows a server connection and the Manage option in Azure Data Studio." lightbox="media/tutorial-transparent-data-encryption-migration-ads/azure-data-studio-manage-panel.png":::
+
+1. In the server menu under **General**, select **Azure SQL Migration**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/launch-migrate-to-azure-sql-wizard-1.png" alt-text="Screenshot that shows the Azure Data Studio server menu.":::
+
+1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/launch-migrate-to-azure-sql-wizard-2.png" alt-text="Screenshot that shows the Migrate to Azure SQL wizard.":::
+
+1. On the first page of the wizard, start a new session or resume a previously saved session.
+
+## Run database assessment
+
+1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/assessment-database-selection.png" alt-text="Screenshot that shows selecting a database for assessment." lightbox="media/tutorial-transparent-data-encryption-migration-ads/assessment-database-selection.png":::
+
+1. In **Step 2: Assessment results**, complete the following steps:
+
+ 1. In **Choose your Azure SQL target**, select **Azure SQL Managed Instance**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/assessment-target-selection.png" alt-text="Screenshot that shows selecting the Azure SQL Managed Instance target." lightbox="media/tutorial-transparent-data-encryption-migration-ads/assessment-target-selection.png":::
+
+ 1. Select **View/Select** to view the assessment results.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/assessment.png" alt-text="Screenshot that shows view/select assessment results." lightbox="media/tutorial-transparent-data-encryption-migration-ads/assessment.png":::
+
+ 1. In the assessment results, select the database, and then review the assessment findings. In this example, you can see the `AdventureWorksTDE` database is protected with transparent data encryption (TDE). The assessment recommends migrating the TDE certificate before you migrate the source database to the managed instance target.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/assessment-findings-details.png" alt-text="Screenshot that shows assessment findings report." lightbox="media/tutorial-transparent-data-encryption-migration-ads/assessment-findings-details.png":::
+
+ 1. Choose **Select** to open the TDE migration configuration panel.
+
+## Configure TDE migration settings
+
+1. In the **Encrypted database selected** section, select **Export my certificates and private key to the target**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration.png" alt-text="Screenshot that shows the TDE migration configuration." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration.png":::
+
+ > [!IMPORTANT]
+ > The **Info box** section describes the required permissions to export the DEK certificates.
+ >
+ > You must ensure that the SQL Server service account has write access to the network share path that you'll use to back up the DEK certificates. Also, the current user should have administrator privileges on the computer where this network path exists.
+
+1. Enter the **network path**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-network-share.png" alt-text="Screenshot that shows the TDE migration configuration for a network share." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-network-share.png":::
+
+ Then select **I give consent to use my credentials for accessing the certificates.** With this action, you're allowing the database migration wizard to back up your DEK certificates to the network share.
+
+1. If you don't want the migration wizard to help you migrate TDE-enabled databases, select **I don't want Azure Data Studio to export the certificates.** to skip this step.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration-stop.png" alt-text="Screenshot that shows how to decline the TDE migration." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration-stop.png":::
+
+ > [!IMPORTANT]
+ > You must migrate the certificates before proceeding with the migration; otherwise, the migration will fail. For more information about migrating TDE certificates manually, see [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
+
+1. If you want to proceed with the TDE certification migration, select **Apply**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration-apply.png" alt-text="Screenshot that shows how to apply the TDE migration configuration." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration-apply.png":::
+
+ The TDE migration configuration panel will close, but you can select **Edit** to modify your network share configuration at any time. Select **Next** to continue the migration process.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration-edit.png" alt-text="Screenshot that shows how to edit the TDE migration configuration." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-configuration-edit.png":::
+
+## Configure migration settings
+
+1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, complete these steps for your target managed instance:
+
+ 1. Select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the managed instance.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/configuration-azure-target.png" alt-text="Screenshot that shows Azure account details." lightbox="media/tutorial-transparent-data-encryption-migration-ads/configuration-azure-target.png":::
+
+ 1. When you're ready, select **Migrate certificates** to start the TDE certificates migration.
+
+## Start and monitor the TDE certificate migration
+
+1. In **Step 3: Migration Status**, the **Certificates Migration** panel will open. The TDE certificates migration progress details are shown on the screen.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-start.png" alt-text="Screenshot that shows how the TDE migration process starts." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-start.png":::
+
+1. Once the TDE migration is completed (or if it has failures), the page displays the relevant updates.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-completed.png" alt-text="Screenshot that shows how the TDE migration process continues." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-completed.png":::
+
+1. If you need to retry the migration, select **Retry migration**.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-retry.png" alt-text="Screenshot that shows how to retry the TDE migration." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-retry.png":::
+
+1. When you're ready, select **Done** to continue the migration wizard.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-done.png" alt-text="Screenshot that shows how to complete the TDE migration." lightbox="media/tutorial-transparent-data-encryption-migration-ads/tde-migration-done.png":::
+
+1. You can monitor the process for each TDE certificate by selecting **Migrate certificates**.
+
+1. Select **Next** to continue the migration wizard until you complete the database migration.
+
+ :::image type="content" source="media/tutorial-transparent-data-encryption-migration-ads/database-migration-continue.png" alt-text="Screenshot that shows how to continue the database migration." lightbox="media/tutorial-transparent-data-encryption-migration-ads/database-migration-continue.png":::
+
+ Check the following step-by-step tutorials for more information about migrating databases online or offline to Azure SQL Managed Instance targets:
+
+ - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance online](/azure/dms/tutorial-sql-server-managed-instance-offline-ads)
+ - [Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline](/azure/dms/tutorial-sql-server-managed-instance-offline-ads)
+
+## Post-migration steps
+
+Your target managed instance should now have the databases and their respective certificates migrated. To verify the current status of the recently migrated database, copy and paste the following example into a new query window in Azure Data Studio while connected to your managed instance target. Then, select **Run**.
+
+```sql
+USE master;
+GO
+
+SELECT db_name(database_id),
+ key_algorithm,
+ encryption_state_desc,
+ encryption_scan_state_desc,
+ percent_complete
+FROM sys.dm_database_encryption_keys
+WHERE database_id = DB_ID('Your database name');
+GO
+```
+
+The query returns information about the database, its encryption state, and the percentage of the encryption scan that is still pending. In this case, the value is zero because the encryption scan has already completed.
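+
+You can also confirm that the TDE certificate itself was recreated in the `master` database of the target. The following T-SQL is a minimal sketch; the certificate names and dates returned depend on your environment.
+
+```sql
+USE master;
+GO
+
+-- Lists the server certificates, including the one protecting the migrated DEK
+SELECT name, subject, start_date, expiry_date, thumbprint
+FROM sys.certificates;
+GO
+```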
+
+
+For more information about encryption with SQL Server, see: [Transparent data encryption (TDE).](/sql/relational-databases/security/encryption/transparent-data-encryption)
+
+## Limitations
+
+The following table describes the current status of TDE-enabled database migration support by Azure SQL target:
+
+| Target | Support | Status |
+| - | - |:-:|
+| Azure SQL Database | No | |
+| Azure SQL Managed Instance | Yes | Preview |
+| SQL Server on Azure VM | No | |
+
+## Next steps
+
+- [Migrate databases with Azure SQL Migration extension for Azure Data Studio](/azure/dms/migration-using-azure-data-studio)
+- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](/azure/dms/tutorial-sql-server-azure-sql-database-offline-ads)
+- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](/azure/dms/tutorial-sql-server-managed-instance-online-ads)
+- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](/azure/dms/tutorial-sql-server-to-virtual-machine-online-ads)
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Microsoft Energy Data Services is updated on an ongoing basis. To stay up to dat
<hr width = 100%>
+## February 2023
+### Product Billing Update
+
+Microsoft Energy Data Services will begin billing February 15, 2023. Prices will be based on a fixed per-hour consumption rate at a 50 percent discount during preview.
+- No upfront costs or termination fees; pay only for what you use.
+- No charges for storage, data transfers or compute overage during preview.
++ ## January 2023 ### Managed Identity Support
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
The JSON syntax for filtering by event type is:
For simple filtering by subject, specify a starting or ending value for the subject. For example, you can specify the subject ends with `.txt` to only get events related to uploading a text file to storage account. Or, you can filter the subject begins with `/blobServices/default/containers/testcontainer` to get all events for that container but not other containers in the storage account.
-When publishing events to custom topics, create subjects for your events that make it easy for subscribers to know whether they're interested in the event. Subscribers use the subject property to filter and route events. Consider adding the path for where the event happened, so subscribers can filter by segments of that path. The path enables subscribers to narrowly or broadly filter events. If you provide a three segment path like `/A/B/C` in the subject, subscribers can filter by the first segment `/A` to get a broad set of events. Those subscribers get events with subjects like `/A/B/C` or `/A/D/E`. Other subscribers can filter by `/A/B` to get a narrower set of events.
+When publishing events to custom topics, create subjects for your events that make it easy for subscribers to know whether they're interested in the event. Subscribers use the **subject** property to filter and route events. Consider adding the path for where the event happened, so subscribers can filter by segments of that path. The path enables subscribers to narrowly or broadly filter events. If you provide a three segment path like `/A/B/C` in the subject, subscribers can filter by the first segment `/A` to get a broad set of events. Those subscribers get events with subjects like `/A/B/C` or `/A/D/E`. Other subscribers can filter by `/A/B` to get a narrower set of events.
-The JSON syntax for filtering by subject is:
+### Examples (Blob Storage events)
+Blob events can be filtered by the event type, container name, or name of the object that was created or deleted.
-```json
-"filter": {
- "subjectBeginsWith": "/blobServices/default/containers/mycontainer/blobs/log",
- "subjectEndsWith": ".jpg"
-}
+The subject of Blob storage events uses the format:
```
+/blobServices/default/containers/<containername>/blobs/<blobname>
+```
+
+To match all events for a storage account, you can leave the subject filters empty.
+
+To match events from blobs created in a set of containers sharing a prefix, use a `subjectBeginsWith` filter like:
+
+```
+/blobServices/default/containers/containerprefix
+```
+
+To match events from blobs created in a specific container, use a `subjectBeginsWith` filter like:
+
+```
+/blobServices/default/containers/containername/
+```
+
+To match events from blobs created in a specific container and sharing a blob name prefix, use a `subjectBeginsWith` filter like:
+
+```
+/blobServices/default/containers/containername/blobs/blobprefix
+```
+
+To match events from blobs created in a specific subfolder of a container, use a `subjectBeginsWith` filter like:
+
+```
+/blobServices/default/containers/{containername}/blobs/{subfolder}/
+```
+
+To match events from blobs created in a specific container and having a specific blob name suffix, use a `subjectEndsWith` filter like `.log` or `.jpg`.
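+
+As a sketch of how these values fit together, an event subscription filter that combines a container-scoped prefix with a `.jpg` suffix uses the `subjectBeginsWith`/`subjectEndsWith` JSON syntax (the container and blob path below are placeholders):
+
+```json
+"filter": {
+  "subjectBeginsWith": "/blobServices/default/containers/mycontainer/blobs/log",
+  "subjectEndsWith": ".jpg"
+}
+```
+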
## Advanced filtering
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Title: Overview of features - Azure Event Hubs | Microsoft Docs
description: This article provides details about features and terminology of Azure Event Hubs. Previously updated : 08/25/2022 Last updated : 02/09/2023 # Features and terminology in Azure Event Hubs
Event Hubs ensures that all events sharing a partition key value are stored toge
Published events are removed from an event hub based on a configurable, timed-based retention policy. Here are a few important points: -- The **default** value and **shortest** possible retention period is **1 hour**.
+- The **default** value and **shortest** possible retention period is **1 hour**. Currently, you can set the retention period in hours only in the Azure portal. Resource Manager template, PowerShell, and CLI allow this property to be set only in days (see the CLI sketch after this list).
- For Event Hubs **Standard**, the maximum retention period is **7 days**. - For Event Hubs **Premium** and **Dedicated**, the maximum retention period is **90 days**. - If you change the retention period, it applies to all events including events that are already in the event hub.
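+
+As a minimal sketch of the days-only behavior outside the portal, assuming the `--message-retention` parameter of `az eventhubs eventhub create` (which takes a value in days), creating an event hub with a three-day retention period might look like this; the resource names are placeholders:
+
+```azurecli
+# Retention is specified in days here, not hours
+az eventhubs eventhub create \
+  --resource-group myResourceGroup \
+  --namespace-name myEventHubsNamespace \
+  --name myEventHub \
+  --message-retention 3
+```
+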
event-hubs Event Hubs Java Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-java-get-started-send.md
This quickstart shows how to send events to and receive events from an event hub using the **azure-messaging-eventhubs** Java package.
+> [!TIP]
+> If you're working with Azure Event Hubs resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Event Hubs, see [Spring Cloud Stream with Azure Event Hubs](/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-azure-event-hub).
+ ## Prerequisites If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
It replicates events to three replicas, distributed across Azure availability zo
In addition to these storage-related features and all capabilities and protocol support of the standard tier, the isolation model of the premium tier enables features like [dynamic partition scale-up](dynamically-add-partitions.md). You also get far more generous [quota allocations](event-hubs-quotas.md). Event Hubs Capture is included at no extra cost. > [!NOTE]
-> Event Hubs Premium supports TLS 1.2 or greater.
+> - Event Hubs Premium supports TLS 1.2 or greater.
+> - The premium tier isn't available in all regions. Try to create a namespace in the Azure portal and see supported regions in the **Location** drop-down list on the **Create Namespace** page.
+ You can purchase 1, 2, 4, 8 and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
firewall-manager Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/policy-overview.md
Previously updated : 10/26/2021 Last updated : 02/10/2023
Azure Firewall supports both Classic rules and policies, but policies is the rec
|Pricing |Billed based on firewall association. See [Pricing](#pricing).|Free| |Supported deployment mechanisms |Portal, REST API, templates, Azure PowerShell, and CLI|Portal, REST API, templates, PowerShell, and CLI. |
-## Standard and Premium policies
+## Basic, Standard, and Premium policies
-Azure Firewall supports Standard and Premium policies. The following table summarizes the difference between the two:
+Azure Firewall supports Basic (preview), Standard, and Premium policies. The following table summarizes the difference between these policies:
|Policy type|Feature support | Firewall SKU support| |||-|
+|Basic policy|NAT rules, Application rules<br>IP Groups<br>Threat Intelligence (alerts)|Basic
|Standard policy |NAT rules, Network rules, Application rules<br>Custom DNS, DNS proxy<br>IP Groups<br>Web Categories<br>Threat Intelligence|Standard or Premium| |Premium policy |All Standard feature support, plus:<br><br>TLS Inspection<br>Web Categories<br>URL Filtering<br>IDPS|Premium
Azure Firewall supports Standard and Premium policies. The following table summa
New policies can be created from scratch or inherited from existing policies. Inheritance allows DevOps to create local firewall policies on top of organization mandated base policy. Policies created with non-empty parent policies inherit all rule collections from the parent policy.
-Network rule collections inherited from a parent policy are always prioritized above network rule collections defined as part of a new policy. The same logic also applies to application rule collections. However, network rule collections are always processed before application rule collections regardless of inheritance.
+Network rule collections inherited from a parent policy are always prioritized over network rule collections defined as part of a new policy. The same logic also applies to application rule collections. However, network rule collections are always processed before application rule collections regardless of inheritance.
Threat Intelligence mode is also inherited from the parent policy. You can set your threat Intelligence mode to a different value to override this behavior, but you can't turn it off. It's only possible to override with a stricter value. For example, if your parent policy is set to **Alert only**, you can configure this local policy to **Alert and deny**.
-Like Threat Intelligence mode, the Threat Intelligence allowlist is inherited from the parent policy. The child policy can add additional IP addresses to the allowlist.
+Like Threat Intelligence mode, the Threat Intelligence allowlist is inherited from the parent policy. The child policy can add more IP addresses to the allowlist.
NAT rule collections aren't inherited because they're specific to a given firewall.
firewall-manager Private Link Inspection Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/private-link-inspection-secure-virtual-hub.md
Private endpoints allow resources access to the private link service deployed in a virtual network. Access to the private endpoint through virtual network peering and on-premises network connections extend the connectivity.
-You may need to filter traffic from clients either on premises or in Azure destined to services exposed via private endpoints in a Virtual WAN connected virtual network. This article walks you through this task using [secured virtual hub](../firewall-manager/secured-virtual-hub.md) with [Azure Firewall](../firewall/overview.md) as the security provider.
+You may need to filter traffic from clients either on-premises or in Azure destined to services exposed via private endpoints in a Virtual WAN connected virtual network. This article walks you through this task using [secured virtual hub](../firewall-manager/secured-virtual-hub.md) with [Azure Firewall](../firewall/overview.md) as the security provider.
Azure Firewall filters traffic using any of the following methods:
* [FQDN in application rules](../firewall/features.md#application-fqdn-filtering-rules) for HTTP, HTTPS, and MSSQL. * Source and destination IP addresses, port, and protocol using [network rules](../firewall/features.md#network-traffic-filtering-rules)
-Use application rules over network rules to inspect traffic destined to private endpoints.
+Application rules are preferred over network rules to inspect traffic destined to private endpoints because Azure Firewall always SNATs traffic with application rules. SNAT is recommended when inspecting traffic destined to a private endpoint due to the limitation described here: [What is a private endpoint?][private-endpoint-overview]. If you're planning on using network rules instead, it is recommended to configure Azure Firewall to always perform SNAT: [Azure Firewall SNAT private IP address ranges][firewall-snat-private-ranges].
+ A secured virtual hub is managed by Microsoft and can't be linked to a [Private DNS Zone](../dns/private-dns-privatednszone.md), which would be required to resolve a [private link resource](../private-link/private-endpoint-overview.md#private-link-resource) FQDN to its corresponding private endpoint IP address. SQL FQDN filtering is supported in [proxy-mode](/azure/azure-sql/database/connectivity-architecture#connection-policy) only (port 1433). *Proxy* mode can result in more latency compared to *redirect*. If you want to continue using redirect mode, which is the default for clients connecting within Azure, you can filter access using FQDN in firewall network rules.
-## Filter traffic using FQDN in network and application rules
+## Filter traffic using network or application rules in Azure Firewall
+
+The following steps enable Azure Firewall to filter traffic using either network rules (FQDN or IP address-based) or application rules:
-The following steps enable Azure Firewall to filter traffic using FQDN in network and application rules:
+### Network rules:
1. Deploy a [DNS forwarder](../private-link/private-endpoint-dns.md#virtual-network-and-on-premises-workloads-using-a-dns-forwarder) virtual machine in a virtual network connected to the secured virtual hub and linked to the Private DNS Zones hosting the A record types for the private endpoints.
-2. Configure [custom DNS settings](../firewall/dns-settings.md#configure-custom-dns-serversazure-portal) to point to the DNS forwarder virtual machine IP address and enable DNS proxy in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub.
+2. Configure [custom DNS servers](../virtual-network/manage-virtual-network.md#change-dns-servers) for the virtual networks connected to the secured virtual hub (a CLI sketch appears after these steps):
+ - **FQDN-based network rules** - configure [custom DNS settings](../firewall/dns-settings.md#configure-custom-dns-serversazure-portal) to point to the DNS forwarder virtual machine IP address and enable DNS proxy in the firewall policy associated with the Azure Firewall. Enabling DNS proxy is required if you want to do FQDN filtering in network rules.
+ - **IP address-based network rules** - the custom DNS settings described in the previous point are **optional**. You can simply configure the custom DNS servers to point to the private IP of the DNS forwarder virtual machine.
+
+3. Depending on the configuration chosen in step **2.**, configure on-premises DNS servers to forward DNS queries for the private endpoints' **public DNS zones** to either the private IP address of the Azure Firewall or that of the DNS forwarder virtual machine.
+
+4. Configure a [network rule](../firewall/tutorial-firewall-deploy-portal.md#configure-a-network-rule) as required in the firewall policy associated with the Azure Firewall. Choose *Destination Type* IP Addresses if going with an **IP address-based** rule and configure the IP address of the private endpoint as *Destination*. For **FQDN-based** network rules, choose *Destination Type* FQDN and configure the private link resource public FQDN as *Destination*.
-3. Configure [custom DNS servers](../virtual-network/manage-virtual-network.md#change-dns-servers) for the virtual networks connected to the secured virtual hub to point to the private IP address associated with the Azure Firewall deployed in the secured virtual hub.
+5. Navigate to the firewall policy associated with the Azure Firewall deployed in the secured virtual hub. Select *Private IP ranges (SNAT)* and select the option to *Always perform SNAT*.
-4. Configure on premises DNS servers to forward DNS queries for the private endpoints public DNS zones to the private IP address associated with the Azure Firewall deployed in the secured virtual hub.
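+
+As a minimal sketch of the custom DNS server configuration in step **2.** (the resource names and IP address are placeholders; point the address at the DNS forwarder, or at the firewall private IP when using DNS proxy):
+
+```azurecli
+# Point the connected virtual network at the custom DNS server
+az network vnet update \
+  --resource-group myResourceGroup \
+  --name vnet-spoke \
+  --dns-servers 10.0.1.4
+```
+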
+### Application rules:
-5. Configure an [application rule](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule) or [network rule](../firewall/tutorial-firewall-deploy-portal.md#configure-a-network-rule) as necessary in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub with *Destination Type* FQDN and the private link resource public FQDN as *Destination*.
+1. For application rules, steps **1.** to **3.** from the previous section still apply. For the custom DNS server configuration, you can either choose to use the Azure Firewall as DNS proxy, or point to the DNS forwarder virtual machine directly.
-6. Navigate to *Secured virtual hubs* in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub and select the secured virtual hub where traffic filtering destined to private endpoints will be configured.
+2. Configure an [application rule](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule) as required in the firewall policy associated with the Azure Firewall. Choose *Destination Type* FQDN and the private link resource public FQDN as *Destination*.
-7. Navigate to **Security configuration**, select **Send via Azure Firewall** under **Private traffic**.
+Lastly, and regardless of the type of rules configured in the Azure Firewall, make sure [Network Policies][network-policies-overview] (at least for UDR support) are enabled in the subnet(s) where the private endpoints are deployed. This will ensure traffic destined to private endpoints will not bypass the Azure Firewall.
-8. Select **Private traffic prefixes** to edit the CIDR prefixes that will be inspected via Azure Firewall in secured virtual hub and add one /32 prefix for each private endpoint as follows:
+ > [!IMPORTANT]
+ > By default, RFC 1918 prefixes are automatically included in the *Private Traffic Prefixes* of the Azure Firewall. For most private endpoints, this will be enough to make sure traffic from on-premises clients, or in different virtual networks connected to the same secured hub, will be inspected by the firewall. In case traffic destined to private endpoints is not being logged in the firewall, try adding the /32 prefix for each private endpoint to the list of *Private Traffic Prefixes*.
- > [!IMPORTANT]
- > If these /32 prefixes are not configured, traffic destined to private endpoints will bypass Azure Firewall.
+If needed, you can edit the CIDR prefixes that will be inspected via Azure Firewall in a secured hub as follows:
+
+1. Navigate to *Secured virtual hubs* in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub and select the secured virtual hub where traffic filtering destined to private endpoints will be configured.
+
+2. Navigate to **Security configuration**, select **Send via Azure Firewall** under **Private traffic**.
+
+3. Select **Private traffic prefixes** to edit the CIDR prefixes that will be inspected via Azure Firewall in secured virtual hub and add one /32 prefix for each private endpoint.
:::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-manager-security-configuration.png" alt-text="Firewall Manager Security Configuration" border="true":::
-These steps only work when the clients and private endpoints are deployed in different virtual networks connected to the same secured virtual hub and for on premises clients. If the clients and private endpoints are deployed in the same virtual network, a UDR with /32 routes for the private endpoints must be created. Configure these routes with **Next hop type** set to **Virtual appliance** and **Next hop address** set to the private IP address of the Azure Firewall deployed in the secured virtual hub. **Propagate gateway routes** must be set to **Yes**.
+To inspect traffic from clients in the same virtual network as the private endpoints, it isn't required to specifically override the /32 routes from the private endpoints. As long as **Network Policies** are enabled in the private endpoint subnet(s), a UDR with a wider address range takes precedence. For instance, configure this UDR with **Next hop type** set to **Virtual appliance**, **Next hop address** set to the private IP address of the Azure Firewall, and the **Address prefix** destination set to the subnet dedicated to all private endpoints deployed in the virtual network. **Propagate gateway routes** must be set to **Yes**.
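+
+A minimal Azure CLI sketch of such a route follows; the route table name, address prefix, and firewall private IP address are placeholders for your own values:
+
+```azurecli
+# Send the private endpoint subnet's traffic through the Azure Firewall private IP
+az network route-table route create \
+  --resource-group myResourceGroup \
+  --route-table-name rt-spoke \
+  --name to-private-endpoints \
+  --address-prefix 10.1.1.0/24 \
+  --next-hop-type VirtualAppliance \
+  --next-hop-ip-address 10.0.0.132
+```
+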
The following diagram illustrates the DNS and data traffic flows for the different clients to connect to a private endpoint deployed in Azure virtual WAN:
In most cases, these problems are caused by one of the following issues:
:::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-policy-private-traffic-configuration.png" alt-text="Private Traffic Secured by Azure Firewall" border="true":::
-2. Verify **Security configuration** in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub. Make sure there's a /32 entry for each private endpoint private IP address you want to filter traffic for under **Private traffic prefixes**.
+2. Verify **Security configuration** in the firewall policy associated with the Azure Firewall deployed in the secured virtual hub. In case traffic destined to private endpoints is not being logged in the firewall, try adding the /32 prefix for each private endpoint to the list of **Private Traffic Prefixes**.
:::image type="content" source="./media/private-link-inspection-secure-virtual-hub/firewall-manager-security-configuration.png" alt-text="Firewall Manager Security Configuration - Private Traffic Prefixes" border="true":::
-3. In the secured virtual hub under virtual WAN, inspect effective routes for the route tables associated with the virtual networks and branches connections you want to filter traffic for. Make sure there are /32 entries for each private endpoint private IP address you want to filter traffic for.
+3. In the secured virtual hub under virtual WAN, inspect effective routes for the route tables associated with the virtual networks and branches connections you want to filter traffic for. If /32 entries were added for each private endpoint you want to inspect traffic for, make sure these are listed in the effective routes.
:::image type="content" source="./media/private-link-inspection-secure-virtual-hub/secured-virtual-hub-effective-routes.png" alt-text="Secured Virtual Hub Effective Routes" border="true":::
-4. Inspect the effective routes on the NICs attached to the virtual machines deployed in the virtual networks you want to filter traffic for. Make sure there are /32 entries for each private endpoint private IP address you want to filter traffic for.
+4. Inspect the effective routes on the NICs attached to the virtual machines deployed in the virtual networks you want to filter traffic for. Make sure there are /32 entries for each private endpoint private IP address you want to filter traffic for (if added).
Azure CLI: ```azurecli-interactive az network nic show-effective-route-table --name <Network Interface Name> --resource-group <Resource Group Name> -o table ```
-5. Inspect the routing tables of your on premises routing devices. Make sure you're learning the address spaces of the virtual networks where the private endpoints are deployed.
+5. Inspect the routing tables of your on-premises routing devices. Make sure you're learning the address spaces of the virtual networks where the private endpoints are deployed.
- Azure virtual WAN doesn't advertise the prefixes configured under **Private traffic prefixes** in firewall policy **Security configuration** to on premises. It's expected that the /32 entries won't show in the routing tables of your on premises routing devices.
+ Azure virtual WAN doesn't advertise the prefixes configured under **Private traffic prefixes** in firewall policy **Security configuration** to on-premises. It's expected that the /32 entries won't show in the routing tables of your on-premises routing devices.
6. Inspect **AzureFirewallApplicationRule** and **AzureFirewallNetworkRule** Azure Firewall logs. Make sure traffic destined to the private endpoints is being logged.
In most cases, these problems are caused by one of the following issues:
## Next steps - [Use Azure Firewall to inspect traffic destined to a private endpoint](../private-link/inspect-traffic-with-azure-firewall.md)+
+[private-endpoint-overview]: ../private-link/private-endpoint-overview.md#limitations
+[firewall-snat-private-ranges]: ../firewall/snat-private-range.md
+[network-policies-overview]: ../private-link/disable-private-endpoint-network-policy.md
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Last updated 06/16/2022 -+ # Release notes: Azure API for FHIR
healthcare-apis Dicom Digital Pathology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-digital-pathology.md
+
+ Title: Digital pathology using Azure Health Data Services DICOM service
+description: This guide is on using DICOM service for digital pathology
+++++ Last updated : 02/09/2023+++
+# Digital pathology using DICOM service
+
+ ## Overview
+
+Pathology is a branch of medical science primarily concerned with the cause, origin, and nature of disease. It involves the examination of tissues, organs, bodily fluids, and autopsies in order to study and diagnose disease.
+Historically, tissue biopsies have been stored on glass slides and examined under a microscope. This creates challenges when clinicians and pathologists need to share information for consultations, diagnosis, and research.
+
+Digital imaging is becoming increasingly popular in the field of pathology as a way to support sharing images outside the lab, training AI/ML models, and long-term storage. This transformation is fueled by the commercial availability of instruments for digitizing microscope slides.
+
+Today, digital pathology scanners generally output imaging in proprietary formats. This complicates sharing and AI/ML model training, blunting many of the advantages of digitization. To ease this transformation, many organizations are beginning to convert [Whole Slide Imaging (WSI)](https://dicom.nema.org/Dicom/DICOMWSI/) digital slides to the DICOM-standard format. Once in DICOM format, these images can be stored in commercially available PACS systems, where they can be managed using tools and processes that radiologists have refined over decades.
+
+## DICOM service for digital pathology
+
+The DICOM service supports unique digital pathology requirements such as:
+
+1. Scale and performance needed to upload pathology DICOM instances that are multiple GBs in size.
+2. Fast frame access to allow the web viewer to pan and zoom DICOM pathology images smoothly with no lags or blurriness.
+3. A cost-effective way to store images long term after diagnosis for archival and research use.
+
+## End-to-end reference solution
+
+### Digitization
+
+Although the [DICOM standard now supports whole-slide images (WSI)](https://dicom.nema.org/dicom/dicomwsi/), many acquisition scanners don't capture images in the DICOM format. The DICOM service only supports images in the DICOM format, so a format conversion is required for WSIs in other formats. Several commercial and open-source solutions exist to perform these format conversions.
+
+Here are some sample open-source tools for building your own converter (a minimal slide-reading sketch follows the list):
+
+- [Orthanc - DICOM Server (orthanc-server.com)](https://www.orthanc-server.com/static.php?page=wsi)
+- [OpenSlide](https://github.com/openslide/openslide)
++
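+The sketch below, which assumes the `openslide-python` bindings and a hypothetical input file name, shows the kind of slide reading a converter typically starts with. The tile data read this way would still need to be encoded into DICOM WSI instances by a tool such as the ones listed above.
+
+```python
+import openslide
+
+# Open a vendor-format whole-slide image (the file name is a hypothetical example).
+slide = openslide.OpenSlide("sample-slide.svs")
+
+# Inspect the image pyramid that a converter would walk through.
+print("Dimensions at level 0:", slide.dimensions)
+print("Pyramid levels:", slide.level_count)
+
+# Read one 1024x1024 region from the highest-resolution level.
+# A converter would tile the whole level and encode each tile as a DICOM frame.
+region = slide.read_region(location=(0, 0), level=0, size=(1024, 1024))
+region.convert("RGB").save("tile-0-0.png")
+
+slide.close()
+```
+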
+### Storage
+
+Each converted WSI results in a DICOM series with multiple instances. We recommend uploading each instance as a single-part POST for better performance.
+
+[Prerequisites](dicomweb-standard-apis-curl.md#prerequisites)
+
+```cmd
+curl -X POST \
+ -H "Content-Type: application/dicom" \
+ -H "Authorization: Bearer {token}" \
+ -H "Accept: application/dicom+json" \
+{service-url}/{version}/studies \
+ --data-binary {dcmFile}.dcm
+```
+
+We have tested **uploads of tens of GBs completing in a few seconds**.
+
+### Retrieving
+
+Viewers retrieve tiles that are stored as frames in a DICOM instance. Each DICOM instance can contain multiple frames. We recommend retrieving frames in parallel with single-part GET requests for better performance (a parallel retrieval sketch follows the example below).
+
+ [Prerequisites](dicomweb-standard-apis-curl.md#prerequisites)
+
+```cmd
+curl -X GET \
+ -H "Authorization: Bearer {token}" \
+ -H "Accept: application/octet-stream; transfer-syntax=*" \
+{service-url}/{version}/studies/{studyInstanceUid}/series/{seriesInstanceUid}/instances/{sopInstanceUid}/frames/{frameNumber} \
+ --output {fileName}
+```
+
+We have tested **downloads of a 60 KB tile in around 60-70 ms** from the client.
+
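+The following minimal sketch illustrates the parallel retrieval pattern, assuming Python with the `requests` library; the service URL, UIDs, token, and frame numbers are hypothetical placeholders.
+
+```python
+import concurrent.futures
+
+import requests
+
+# Hypothetical placeholders - replace with your service URL, UIDs, and token.
+INSTANCE_URL = "{service-url}/{version}/studies/{study}/series/{series}/instances/{instance}"
+HEADERS = {
+    "Authorization": "Bearer {token}",
+    "Accept": "application/octet-stream; transfer-syntax=*",
+}
+
+def fetch_frame(frame_number: int) -> bytes:
+    """Retrieve a single frame (tile) as raw bytes."""
+    response = requests.get(f"{INSTANCE_URL}/frames/{frame_number}", headers=HEADERS)
+    response.raise_for_status()
+    return response.content
+
+# Fetch the first 16 frames in parallel, much as a viewer fetches the visible tiles.
+with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
+    frames = list(pool.map(fetch_frame, range(1, 17)))
+```
+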
+### Viewers
+
+We recommend using any WSI viewer that can be configured with a DICOMweb service and OIDC authentication.
+
+Sample open-source viewer:
+
+- [Slim (MGB)](https://github.com/herrmannlab/slim)
+
+Follow the [CORS guidelines](configure-cross-origin-resource-sharing.md) if the viewer interacts directly with the DICOM service.
++
+## Recommended ISVs
+
+Reach out to dicom-support@microsoft.com if you want to work with our partner ISVs that provide end-to-end solutions and support.
+
+## Next steps
+
+For more information about the DICOM service, see:
+
+>[!div class="nextstepaction"]
+>[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
+
+>[!div class="nextstepaction"]
+>[Using DICOMweb&trade; Standard APIs with the DICOM service](dicomweb-standard-apis-with-dicom-services.md)
healthcare-apis How To Configure Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-device-mappings.md
Previously updated : 1/12/2023 Last updated : 02/09/2023
You can define one or more templates within the MedTech service device mapping.
|Template Type|Description|
|-|--|
-|[CalculatedContentTemplate](how-to-use-calculatedcontenttemplate-mappings.md)|A template that supports writing expressions using one of several expression languages. Supports data transformation via the use of JMESPath functions.|
+|[CalculatedContent](how-to-use-calculatedcontent-mappings.md)|A template that supports writing expressions using one of several expression languages. Supports data transformation via the use of JMESPath functions.|
|[IotJsonPathContentTemplate](how-to-use-iot-jsonpath-content-mappings.md)|A template that supports messages sent from Azure IoT Hub or the Legacy Export Data feature of Azure IoT Central.| > [!TIP]
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Title: Configure the MedTech service metrics - Azure Health Data Services
+ Title: How to configure the MedTech service metrics - Azure Health Data Services
description: This article explains how to configure the MedTech service metrics. Previously updated : 1/20/2023 Last updated : 02/09/2023
|Metric category|Metric name|Metric description|
|--|--|--|
|Availability|IotConnector Health Status|The overall health of the MedTech service.|
|Errors|Total Error Count|The total number of errors.|
-|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](understand-service.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|Average Group Stage Latency|The average latency of the group stage. The [group stage](understand-service.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
|Latency|Average Normalize Stage Latency|The average latency of the normalized stage. The [normalized stage](understand-service.md#normalize) performs normalization on raw incoming messages.|
|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](understand-service.md#persist) by the MedTech service.|
|Traffic|Number of Incoming Messages|The number of received raw [incoming messages](understand-service.md#ingest) (for example, the device events) from the configured source event hub.|
healthcare-apis How To Use Calculatedcontent Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-mappings.md
+
+ Title: How to use CalculatedContent mappings with the MedTech service device mappings - Azure Health Data Services
+description: This article describes how to use CalculatedContent mappings with the MedTech service device mappings.
++++ Last updated : 02/09/2023+++
+# How to use CalculatedContent mappings
+
+This article describes how to use CalculatedContent mappings with MedTech service device mappings.
+
+## CalculatedContent mappings
+
+The MedTech service provides an expression-based content template to both match the wanted template and extract values. **Expressions** may be written in either JSONPath or JMESPath. Each expression within the template may choose its own expression language.
+
+> [!NOTE]
+> If an expression language isn't defined, the default expression language configured for the template will be used. The default is JSONPath but can be overwritten if needed.
+
+An expression is defined as:
+
+```json
+<name of expression> : {
+ "value" : <the expression>,
+ "language": <the expression language>
+ }
+```
+
+In the example below, *typeMatchExpression* is defined as:
+
+```json
+"templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": {
+ "value" : "$..[?(@heartRate)]",
+ "language": "JsonPath"
+ },
+ ...
+ }
+```
+
+> [!TIP]
+> The default expression language for MedTech service device mappings is JsonPath. If you want to use JsonPath, you may supply the expression alone.
+
+```json
+"templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@heartRate)]",
+ ...
+ }
+```
+
+The default expression language for MedTech service device mappings can be explicitly set using the `defaultExpressionLanguage` parameter:
+
+```json
+"templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "defaultExpressionLanguage": "JsonPath",
+ "typeMatchExpression": "$..[?(@heartRate)]",
+ ...
+ }
+```
+
+The CalculatedContent mappings allow matching on and extracting values from an Azure Event Hubs message using **Expressions** as defined below:
+
+|Property|Description|Example|
+|--|--|-|
+|TypeName|The type to associate with measurements that match the template|`heartrate`|
+|TypeMatchExpression|The expression that is evaluated against the EventData payload. If a matching JToken is found, the template is considered a match. All later expressions are evaluated against the extracted JToken matched here.|`$..[?(@heartRate)]`|
+|TimestampExpression|The expression to extract the timestamp value for the measurement's OccurrenceTimeUtc.|`$.matchedToken.endDate`|
+|DeviceIdExpression|The expression to extract the device identifier.|`$.matchedToken.deviceId`|
+|PatientIdExpression|*Required* when IdentityResolution is in **Create** mode and *Optional* when IdentityResolution is in **Lookup** mode. The expression to extract the patient identifier.|`$.matchedToken.patientId`|
+|EncounterIdExpression|*Optional*: The expression to extract the encounter identifier.|`$.matchedToken.encounterId`|
+|CorrelationIdExpression|*Optional*: The expression to extract the correlation identifier. This output can be used to group values into a single observation in the FHIR destination mappings.|`$.matchedToken.correlationId`|
+|Values[].ValueName|The name to associate with the value extracted by the next expression. Used to bind the wanted value/component in the FHIR destination mapping template.|`hr`|
+|Values[].ValueExpression|The expression to extract the wanted value.|`$.matchedToken.heartRate`|
+|Values[].Required|Requires the value to be present in the payload. If the value isn't found, a measurement won't be generated, and an InvalidOperationException will be thrown.|`true`|
+
+### Expression Languages
+
+When specifying the language to use for the expression, the following values are valid:
+
+| Expression Language | Value |
+||--|
+| JSONPath | **JsonPath** |
+| JMESPath | **JmesPath** |
+
+>[!TIP]
+> For more information on JSONPath, see [JSONPath](https://goessner.net/articles/JsonPath/). CalculatedContent mappings use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
+>
+> For more information on JMESPath, see [JMESPath](https://jmespath.org/specification.html). CalculatedContent mappings use the [JMESPath .NET implementation](https://github.com/jdevillard/JmesPath.Net) for resolving JMESPath expressions.
+
+### Custom functions
+
+A set of MedTech service custom functions are also available. The MedTech service custom functions are outside of the functions provided as part of the JMESPath specification. For more information on the MedTech service custom functions, see [How to use MedTech service custom functions](how-to-use-custom-functions.md).
+
+### Matched Token
+
+The **TypeMatchExpression** is evaluated against the incoming EventData payload. If a matching JToken is found, the template is considered a match.
+
+All later expressions are evaluated against a new JToken. This new JToken contains both the original EventData payload and the extracted JToken matched here.
+
+In this way, the original payload and the matched object are available to each later expression. The extracted JToken will be available as the property **matchedToken**.
+
+Given this example message:
+
+*Message*
+
+```json
+{
+ "Body": {
+ "deviceId": "device123",
+ "data": [
+ {
+ "systolic": "120", // Match
+ "diastolic": "80", // Match
+ "date": "2021-07-13T17:29:01.061144Z"
+ },
+ {
+ "systolic": "122", // Match
+ "diastolic": "82", // Match
+ "date": "2021-07-13T17:28:01.061122Z"
+ }
+ ]
+ },
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
+
+*Template*
+
+```json
+{
+ "templateType": "CollectionContent",
+ "template": [
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@systolic && @diastolic)]", // Expression
+ "deviceIdExpression": "$.Body.deviceId", // This accesses the attribute 'deviceId' which belongs to the original event data
+ "timestampExpression": "$.matchedToken.date",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.systolic",
+ "valueName": "systolic"
+ },
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.diastolic",
+ "valueName": "diastolic"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+Two matches will be extracted using the above expression and used to create JTokens. Later expressions will be evaluated using the following JTokens:
+
+```json
+{
+ "Body": {
+ "deviceId": "device123",
+ "data": [
+ {
+ "systolic": "120",
+ "diastolic": "80",
+ "date": "2021-07-13T17:29:01.061144Z"
+ },
+ {
+ "systolic": "122",
+ "diastolic": "82",
+ "date": "2021-07-13T17:28:01.061122Z"
+ }
+ ]
+ },
+ "Properties": {},
+ "SystemProperties": {},
+ "matchedToken" : {
+ "systolic": "120",
+ "diastolic": "80",
+ "date": "2021-07-13T17:29:01.061144Z"
+ }
+}
+```
+
+And
+
+```json
+{
+ "Body": {
+ "deviceId": "device123",
+ "data": [
+ {
+ "systolic": "120",
+ "diastolic": "80",
+ "date": "2021-07-13T17:29:01.061144Z"
+ },
+ {
+ "systolic": "122",
+ "diastolic": "82",
+ "date": "2021-07-13T17:28:01.061122Z"
+ }
+ ]
+ },
+ "Properties": {},
+ "SystemProperties": {},
+ "matchedToken" : {
+ "systolic": "122",
+ "diastolic": "82",
+ "date": "2021-07-13T17:28:01.061122Z"
+ }
+}
+```
+
+### Examples
+
+**Heart Rate**
+
+*Message*
+
+```json
+{
+ "Body": {
+ "heartRate": "78",
+ "endDate": "2019-02-01T22:46:01.8750000Z",
+ "deviceId": "device123"
+ },
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
+
+*Template*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@heartRate)]",
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ }
+```
+
+**Blood Pressure**
+
+*Message*
+
+```json
+{
+ "Body": {
+ "systolic": "123", // Match
+ "diastolic" : "87", // Match
+ "endDate": "2019-02-01T22:46:01.8750000Z",
+ "deviceId": "device123"
+ },
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
+
+*Template*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "bloodpressure",
+ "typeMatchExpression": "$..[?(@systolic && @diastolic)]", // Expression
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.systolic",
+ "valueName": "systolic"
+ },
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.diastolic",
+ "valueName": "diastolic"
+ }
+ ]
+ }
+ }
+```
+
+**Project Multiple Measurements from Single Message**
+
+*Message*
+
+```json
+{
+ "Body": {
+ "heartRate": "78", // Match (Template 1)
+ "steps": "2", // Match (Template 2)
+ "endDate": "2019-02-01T22:46:01.8750000Z",
+ "deviceId": "device123"
+ },
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
+
+*Template 1*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@heartRate)]", // Expression
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ },
+```
+
+*Template 2*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "stepcount",
+ "typeMatchExpression": "$..[?(@steps)]", // Expression
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.steps",
+ "valueName": "steps"
+ }
+ ]
+ }
+ }
+```
+
+**Project Multiple Measurements from Array in Message**
+
+*Message*
+
+```json
+{
+ "Body": [
+ {
+ "heartRate": "78", // Match
+ "endDate": "2019-02-01T20:46:01.8750000Z",
+ "deviceId": "device123"
+ },
+ {
+ "heartRate": "81", // Match
+ "endDate": "2019-02-01T21:46:01.8750000Z",
+ "deviceId": "device123"
+ },
+ {
+ "heartRate": "72", // Match
+ "endDate": "2019-02-01T22:46:01.8750000Z",
+ "deviceId": "device123"
+ }
+ ],
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
+
+*Template*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@heartRate)]", // Expression
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.heartRate",
+ "valueName": "hr"
+ }
+ ]
+ }
+ }
+```
+
+**Project Data From Matched Token And Original Event**
+
+*Message*
+
+```json
+{
+ "Body": {
+ "deviceId": "device123",
+ "data": [
+ {
+ "systolic": "120", // Match
+ "diastolic": "80", // Match
+ "date": "2021-07-13T17:29:01.061144Z"
+ },
+ {
+ "systolic": "122", // Match
+ "diastolic": "82", // Match
+ "date": "2021-07-13T17:28:01.061122Z"
+ }
+ ]
+ },
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
+
+*Template*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heartrate",
+ "typeMatchExpression": "$..[?(@systolic && @diastolic)]", // Expression
+ "deviceIdExpression": "$.Body.deviceId", // This accesses the attribute 'deviceId' which belongs to the original event data
+ "timestampExpression": "$.matchedToken.date",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.systolic",
+ "valueName": "systolic"
+ },
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.diastolic",
+ "valueName": "diastolic"
+ }
+ ]
+ }
+ }
+```
+
+**Select and transform incoming data**
+
+In the following example, height data arrives in either inches or meters. We want all normalized height data to be in meters. To achieve this outcome, we create a template that targets only height data in inches and transforms it into meters. Another template targets height data in meters and simply stores it as is.
+
+*Message*
+
+```json
+{
+ "Body": [
+ {
+ "height": "78",
+ "unit": "inches", // Match (Template 1)
+ "endDate": "2019-02-01T22:46:01.8750000Z",
+ "deviceId": "device123"
+ },
+ {
+ "height": "1.9304",
+ "unit": "meters", // Match (Template 2)
+ "endDate": "2019-02-01T23:46:01.8750000Z",
+ "deviceId": "device123"
+ }
+ ],
+ "Properties": {},
+ "SystemProperties": {}
+}
+```
+
+*Template 1*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heightInMeters",
+ "typeMatchExpression": "$..[?(@unit == 'inches')]",
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": {
+ "value": "multiply(to_number(matchedToken.height), `0.0254`)", // Convert inches to meters. Notice we utilize JMESPath as that gives us access to transformation functions
+ "language": "JmesPath"
+ },
+ "valueName": "height"
+ }
+ ]
+ }
+ }
+```
+
+*Template 2*
+
+```json
+ {
+ "templateType": "CalculatedContent",
+ "template": {
+ "typeName": "heightInMeters",
+ "typeMatchExpression": "$..[?(@unit == 'meters')]",
+ "deviceIdExpression": "$.matchedToken.deviceId",
+ "timestampExpression": "$.matchedToken.endDate",
+ "values": [
+ {
+ "required": "true",
+ "valueExpression": "$.matchedToken.height", // Simply extract the height as it is already in meters
+ "valueName": "height"
+ }
+ ]
+ }
+ }
+```
+
+> [!TIP]
+> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors.
+
+## Next steps
+
+In this article, you learned how to configure the MedTech service device mappings using CalculatedContent mappings.
+
+To learn how to configure FHIR destination mappings, see
+
+> [!div class="nextstepaction"]
+> [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
+
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
Previously updated : 1/18/2023 Last updated : 02/09/2023
|Metric category|Metric name|Metric description|
|--|--|--|
|Availability|IotConnector Health Status|The overall health of the MedTech service.|
|Errors|**Total Error Count**|The total number of errors.|
-|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](understand-service.md#group) performs buffering, aggregating, and grouping on normalized messages.|
+|Latency|**Average Group Stage Latency**|The average latency of the group stage. The [group stage](understand-service.md#groupoptional) performs buffering, aggregating, and grouping on normalized messages.|
|Latency|**Average Normalize Stage Latency**|The average latency of the normalized stage. The [normalized stage](understand-service.md#normalize) performs normalization on raw incoming messages.|
|Traffic|Number of Fhir resources saved|The total number of Fast Healthcare Interoperability Resources (FHIR&#174;) resources [updated or persisted](understand-service.md#persist) by the MedTech service.|
|Traffic|**Number of Incoming Messages**|The number of received raw [incoming messages](understand-service.md#ingest) (for example, the device events) from the configured source event hub.|
healthcare-apis Understand Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md
Title: Understand the MedTech service device message data transformation - Azure Health Data Services
-description: This article will provide you with an understanding of the MedTech service device messaging data transformation to FHIR Observation resources. The MedTech service ingests, normalizes, groups, transforms, and persists device message data into the FHIR service.
+description: This article provides an understanding of the MedTech service device messaging data transformation to FHIR Observation resources. The MedTech service ingests, normalizes, groups, transforms, and persists device message data in the FHIR service.
Previously updated : 1/25/2023 Last updated : 02/09/2023 # Understand the MedTech service device message data transformation
-This article provides an overview of the device message data processing stages within the [MedTech service](overview.md). The MedTech service transforms device message data into Fast Healthcare Interoperability Resources (FHIR&#174;) [Observation](https://www.hl7.org/fhir/observation.html) resources for persistence on the [FHIR service](../fhir/overview.md).
+> [!NOTE]
+> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+
+This article provides an overview of the device message data processing stages within the [MedTech service](overview.md). The MedTech service transforms device message data into FHIR [Observation](https://www.hl7.org/fhir/observation.html) resources for persistence on the [FHIR service](../fhir/overview.md).
The MedTech service device message data processing follows these steps and in this order: > [!div class="checklist"] > - Ingest > - Normalize - Device mappings applied.
-> - Group
+> - Group - (Optional)
> - Transform - FHIR destination mappings applied. > - Persist ## Ingest Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub (`device message event hub`) and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device messages are processed.
Normalize is the next stage where device message data is processed using user-se
The normalization process not only simplifies data processing at later stages, but also provides the capability to project one device message into multiple normalized messages. For instance, a device could send multiple vital signs for body temperature, pulse rate, blood pressure, and respiration rate in a single device message. This device message would create four separate FHIR Observation resources. Each resource would represent a different vital sign, with the device message projected into four different normalized messages.
-## Group
+## Group - (Optional)
Group is the next *optional* stage where the normalized messages available from the MedTech service normalization stage are grouped using three different parameters: > [!div class="checklist"]
At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, alon
> [!NOTE] > All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, we advise that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual device resource as a parent.
-If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming device messages. If set to `Create`, the MedTech service will create minimal Device and Patient resources on the FHIR service.
+If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to `Create`, the MedTech service creates minimal Device and Patient resources on the FHIR service.
> [!NOTE] > The `Resolution Type` can also be adjusted post deployment of the MedTech service in the event that a different type is later desired.
If no Device resource for a given device identifier exists in the FHIR service,
The MedTech service buffers the FHIR Observations resources created during the transformation stage and provides near real-time processing. However, it can potentially take up to five minutes for FHIR Observation resources to be persisted in the FHIR service. ## Persist
-Persist is the final stage where the FHIR Observation resources from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation resource is new, it will be created in the FHIR service. If the FHIR Observation resource already existed, it will get updated in the FHIR service.
+Persist is the final stage where the FHIR Observation resources from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation resource is new, it's created in the FHIR service. If the FHIR Observation resource already existed, it gets updated in the FHIR service.
The FHIR service uses the MedTech service's [system-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](/azure/role-based-access-control/overview) for secure access to the FHIR service.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Last updated 01/25/2023-+
Azure Health Data Services is a set of managed API services based on open standa
## January 2023
-### MedTech service
+### Azure Health Data Services
-**Qatar Central region is Generally Available (GA)**
+**Azure Health Data Services Generally Available (GA) in new regions**
-Customers in Qatar Central can now access the MedTech service.
+General availability (GA) of Azure Health Data Services in the France Central, North Central US, and Qatar Central regions.
### DICOM service
One new sample app has been released in the [Health Data Services samples repo](
## **December 2022**
-#### Azure Health Data Services
-
-**Azure Health Data services General Available (GA) in new regions**
-
-General availability (GA) of Azure Health Data services in France Central, North Central US and Qatar Central Regions.
-
#### DICOM service
Enabled DICOM service to work with workspaces that have names beginning with a l
**MedTech service normalized improvements with calculations to support and enhance health data standardization.**
-See [Use Device mappings](./../healthcare-apis/iot/how-to-use-device-mappings.md) and [Calculated Content Templates](./../healthcare-apis/iot/how-to-use-calculatedcontenttemplate-mappings.md)
+See [Use device mappings](./../healthcare-apis/iot/how-to-use-device-mappings.md) and [CalculatedContent](./../healthcare-apis/iot/how-to-use-calculatedcontent-mappings.md)
iot-develop Tutorial Use Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-use-mqtt.md
You can now navigate the IoT Plug and Play component:
:::image type="content" source="media/tutorial-use-mqtt/components-iot-explorer.png" alt-text="Screenshot showing the component view of an IoT Plug and Play device in Azure IoT explorer.":::
-You can now modify your device code to implement the telemetry, properties, and commands defined in your model. To see an example implementation of the thermostat device using the Mosquitto library, see [Using MQTT PnP with Azure IoTHub without the IoT SDK on Windows](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Windows/PnPMQTTWin32) on GitHub.
+You can now modify your device code to implement the telemetry, properties, and commands defined in your model. For an example implementation using the Mosquitto library, see [Using MQTT with Azure IoT Hub without an SDK](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/mosquitto) on GitHub.
> [!NOTE] >The client uses the `IoTHubRootCA_Baltimore.pem` root certificate file to verify the identity of the IoT hub it connects to.
For more information about MQTT, visit the [MQTT Samples for Azure IoT](https://
In this tutorial, you learned how to modify an MQTT device client to follow the IoT Plug and Play conventions. To learn more about IoT Hub support for the MQTT protocol, see: > [!div class="nextstepaction"]
-> [Communicate with your IoT hub using the MQTT protocol](../iot-hub/iot-hub-mqtt-support.md)
+> [Communicate with your IoT hub using the MQTT protocol](../iot-hub/iot-hub-mqtt-support.md)
key-vault About Keys Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/about-keys-details.md
Previously updated : 01/20/2023 Last updated : 02/09/2023
Key vault key auto-rotation can be set by configuring key auto-rotation policy.
In addition to the key material, the following attributes may be specified. In a JSON Request, the attributes keyword and braces, '{' '}', are required even if there are no attributes specified. -- *enabled*: boolean, optional, default is **true**. Specifies whether the key is enabled and useable for cryptographic operations. The *enabled* attribute is used with *nbf* and *exp*. When an operation occurs between *nbf* and *exp*, it will only be permitted if *enabled* is set to **true**. Operations outside the *nbf* / *exp* window are automatically disallowed, except for certain operation types under [particular conditions](#date-time-controlled-operations).-- *nbf*: IntDate, optional, default is now. The *nbf* (not before) attribute identifies the time before which the key MUST NOT be used for cryptographic operations, except for certain operation types under [particular conditions](#date-time-controlled-operations). The processing of the *nbf* attribute requires that the current date/time MUST be after or equal to the not-before date/time listed in the *nbf* attribute. Key Vault MAY provide for some small leeway, normally no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value. -- *exp*: IntDate, optional, default is "forever". The *exp* (expiration time) attribute identifies the expiration time on or after which the key MUST NOT be used for cryptographic operation, except for certain operation types under [particular conditions](#date-time-controlled-operations). The processing of the *exp* attribute requires that the current date/time MUST be before the expiration date/time listed in the *exp* attribute. Key Vault MAY provide for some small leeway, typically no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value.
+- *enabled*: boolean, optional, default is **true**. Specifies whether the key is enabled and useable for cryptographic operations. The *enabled* attribute is used with *nbf* and *exp*. When an operation occurs between *nbf* and *exp*, it will only be permitted if *enabled* is set to **true**. Operations outside the *nbf* / *exp* window are automatically disallowed, except for [decrypt, unwrap, and verify](#date-time-controlled-operations).
+- *nbf*: IntDate, optional, default is now. The *nbf* (not before) attribute identifies the time before which the key MUST NOT be used for cryptographic operations, except for [decrypt, unwrap, and verify](#date-time-controlled-operations). The processing of the *nbf* attribute requires that the current date/time MUST be after or equal to the not-before date/time listed in the *nbf* attribute. Key Vault MAY provide for some small leeway, normally no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value.
+- *exp*: IntDate, optional, default is "forever". The *exp* (expiration time) attribute identifies the expiration time on or after which the key MUST NOT be used for cryptographic operation, except for [decrypt, unwrap, and verify](#date-time-controlled-operations). The processing of the *exp* attribute requires that the current date/time MUST be before the expiration date/time listed in the *exp* attribute. Key Vault MAY provide for some small leeway, typically no more than a few minutes, to account for clock skew. Its value MUST be a number containing an IntDate value.
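+The following minimal sketch shows how these attributes might be set when creating a key, assuming the `azure-keyvault-keys` and `azure-identity` Python packages (where *nbf* and *exp* surface as `not_before` and `expires_on`) and a hypothetical vault URL and key name:
+
+```python
+from datetime import datetime, timedelta, timezone
+
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.keys import KeyClient
+
+client = KeyClient(
+    vault_url="https://<your-vault-name>.vault.azure.net",
+    credential=DefaultAzureCredential(),
+)
+
+# The key becomes usable in one day (nbf) and stops being usable after 90 days (exp).
+now = datetime.now(timezone.utc)
+key = client.create_rsa_key(
+    "example-key",
+    size=2048,
+    enabled=True,
+    not_before=now + timedelta(days=1),
+    expires_on=now + timedelta(days=90),
+)
+print(key.properties.not_before, key.properties.expires_on)
+```
+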
There are more read-only attributes that are included in any response that includes key attributes:
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
Previously updated : 01/30/2023 Last updated : 02/10/2023 #Customer intent: As a full-stack machine learning pro, I want to use Apache Spark in Azure Machine Learning. # Apache Spark in Azure Machine Learning (preview)
-Azure Machine Learning integration with Azure Synapse Analytics (preview) provides easy access to distributed computing through the Apache Spark framework. This integration offers these Apache Spark computing experiences:
+Azure Machine Learning integration with Azure Synapse Analytics (preview) provides easy access to distributed computation resources through the Apache Spark framework. This integration offers these Apache Spark computing experiences:
- Managed (Automatic) Spark compute - Attached Synapse Spark pool ## Managed (Automatic) Spark compute
-Azure Machine Learning Managed (Automatic) Spark compute is the easiest way to accomplish distributed computing tasks in the Azure Machine Learning environment by using the Apache Spark framework. Azure Machine Learning users can use a fully managed, serverless, on-demand Apache Spark compute cluster. Those users can avoid the need to create an Azure Synapse workspace and a Synapse Spark pool.
+With the Apache Spark framework, Azure Machine Learning Managed (Automatic) Spark compute is the easiest way to accomplish distributed computing tasks in the Azure Machine Learning environment. Azure Machine Learning offers a fully managed, serverless, on-demand Apache Spark compute cluster. Its users can avoid the need to create an Azure Synapse workspace and a Synapse Spark pool.
Users can define resources, including instance type and Apache Spark runtime version. They can then use those resources to access Managed (Automatic) Spark compute in Azure Machine Learning notebooks for:
Users can define resources, including instance type and Apache Spark runtime ver
### Points to consider
-Managed (Automatic) Spark compute works well for most user scenarios that require quick access to distributed computing through Apache Spark. But to make an informed decision, users should consider the advantages and disadvantages of this approach.
+Managed (Automatic) Spark compute works well for most user scenarios that require quick access to distributed computing through Apache Spark. However, to make an informed decision, users should consider the advantages and disadvantages of this approach.
Advantages: -- There are no dependencies on other Azure resources to be created for Apache Spark.-- No permissions are required in the subscription to create Azure Synapse-related resources.-- There's no need for SQL pool quotas.
+- No dependencies on other Azure resources to be created for Apache Spark (Azure Synapse infrastructure operates under the hood).
+- No required subscription permissions to create Azure Synapse-related resources.
+- No need for SQL pool quotas.
Disadvantages: - A persistent Hive metastore is missing. Managed (Automatic) Spark compute supports only in-memory Spark SQL.-- No tables or databases are available.-- Azure Purview integration is missing.-- Linked services aren't available.-- There are fewer data sources and connectors.-- Pool-level configuration is missing.-- Pool-level library management is missing.-- There's only partial support for `mssparkutils`.
+- No available tables or databases.
+- Missing Azure Purview integration.
+- No available linked services.
+- Fewer data sources and connectors.
+- No pool-level configuration.
+- No pool-level library management.
+- Only partial support for `mssparkutils`.
### Network configuration
-As of January 2023, creating a Managed (Automatic) Spark compute inside a virtual network and creating a private endpoint to Azure Synapse are not supported.
+As of January 2023, creating a Managed (Automatic) Spark compute inside a virtual network and creating a private endpoint to Azure Synapse aren't supported.
### Inactivity periods and tear-down mechanism
-A Managed (Automatic) Spark compute (*cold start*) resource might need three to five minutes to start the Spark session when it's first launched. The automated Managed (Automatic) Spark compute provisioning, backed by Azure Synapse, causes this delay. After the Managed (Automatic) Spark compute is provisioned and an Apache Spark session starts, subsequent code executions (*warm start*) won't experience this delay.
+At first launch, a Managed (Automatic) Spark compute resource (*cold start*) might need three to five minutes to start the Spark session itself. The automated Managed (Automatic) Spark compute provisioning, backed by Azure Synapse, causes this delay. After the Managed (Automatic) Spark compute is provisioned, and an Apache Spark session starts, subsequent code executions (*warm start*) won't experience this delay.
-The Spark session configuration offers an option that defines a session timeout (in minutes). The Spark session will end after an inactivity period that exceeds the user-defined timeout. If another Spark session doesn't start in the following 10 minutes, resources provisioned for the Managed (Automatic) Spark compute will be torn down.
+The Spark session configuration offers an option that defines a session timeout (in minutes). The Spark session will end after an inactivity period that exceeds the user-defined timeout. If another Spark session doesn't start in the following ten minutes, resources provisioned for the Managed (Automatic) Spark compute will be torn down.
After the Managed (Automatic) Spark compute resource tear-down happens, submission of the next job will require a *cold start*. The next visualization shows some session inactivity period and cluster teardown scenarios. :::image type="content" source="./media/apache-spark-azure-ml-concepts/spark-session-timeout-teardown.png" lightbox="./media/apache-spark-azure-ml-concepts/spark-session-timeout-teardown.png" alt-text="Expandable diagram that shows scenarios for Apache Spark session inactivity period and cluster teardown.":::
+> [!NOTE]
+> For a session-level conda package:
+> - *Cold start* time will take about ten to fifteen minutes.
+> - *Warm start* time using the same conda package will take about one minute.
+> - *Warm start* with a different conda package will also take about ten to fifteen minutes.
+ ## Attached Synapse Spark pool A Spark pool created in an Azure Synapse workspace becomes available in the Azure Machine Learning workspace with the attached Synapse Spark pool. This option might be suitable for users who want to reuse an existing Synapse Spark pool.
Attachment of a Synapse Spark pool to an Azure Machine Learning workspace requir
- [Spark batch job submission](./how-to-submit-spark-jobs.md) - [Running machine learning pipelines with a Spark component](./how-to-submit-spark-jobs.md#spark-component-in-a-pipeline-job)
-An attached Synapse Spark pool provides access to native Azure Synapse features. The user is responsible for provisioning, attaching, configuring, and managing the Synapse Spark pool.
+An attached Synapse Spark pool provides access to native Azure Synapse features. The user is responsible for the Synapse Spark pool provisioning, attaching, configuration, and management.
The Spark session configuration for an attached Synapse Spark pool also offers an option to define a session timeout (in minutes). The session timeout behavior resembles the description in [the previous section](#inactivity-periods-and-tear-down-mechanism), except that the associated resources are never torn down after the session timeout. ## Defining Spark cluster size
-You can define Spark cluster size by using three parameter values in Azure Machine Learning Spark jobs:
+You can define Spark cluster size with three parameter values in Azure Machine Learning Spark jobs:
- Number of executors - Executor cores - Executor memory
-You should consider an Azure Machine Learning Apache Spark executor as an equivalent of Azure Spark worker nodes. An example can explain these parameters. Let's say that you defined the number of executors as 6 (equivalent to six worker nodes), executor cores as 4, and executor memory as 28 GB. Your Spark job will then have access to a cluster with 24 cores and 168 GB of memory.
+You should consider an Azure Machine Learning Apache Spark executor as an equivalent of Azure Spark worker nodes. An example can explain these parameters. Let's say that you defined the number of executors as 6 (equivalent to six worker nodes), executor cores as 4, and executor memory as 28 GB. Your Spark job then has access to a cluster with 24 cores and 168 GB of memory.
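+The following plain Python snippet (not an Azure Machine Learning API call) double-checks that arithmetic:
+
+```python
+# Example from the paragraph above: 6 executors, each with 4 cores and 28 GB of memory.
+executor_instances = 6
+executor_cores = 4
+executor_memory_gb = 28
+
+total_cores = executor_instances * executor_cores          # 24 cores
+total_memory_gb = executor_instances * executor_memory_gb  # 168 GB
+print(f"Cluster size: {total_cores} cores, {total_memory_gb} GB memory")
+```
+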
## Ensuring resource access for Spark jobs
To access data and other resources, a Spark job can use either a user identity p
[This article](./how-to-submit-spark-jobs.md#ensuring-resource-access-for-spark-jobs) describes resource access for Spark jobs. In a notebook session, both the Managed (Automatic) Spark compute and the attached Synapse Spark pool use user identity passthrough for data access during [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md). > [!NOTE]
-> To ensure successful Spark job execution, assign **Contributor** and **Storage Blob Data Contributor** roles (on the Azure storage account that's used for data input and output) to the identity that's used for submitting the Spark job.
->
-> If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace, and that workspace has an associated managed virtual network, [configure a managed private endpoint to a storage account](../synapse-analytics/security/connect-to-a-secure-storage-account.md). This configuration will help ensure data access.
+> - To ensure successful Spark job execution, assign **Contributor** and **Storage Blob Data Contributor** roles (on the Azure storage account used for data input and output) to the identity that's used for submitting the Spark job.
+> - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace, and that workspace has an associated managed virtual network, [configure a managed private endpoint to a storage account](../synapse-analytics/security/connect-to-a-secure-storage-account.md). This configuration will help ensure data access.
+> - Both Managed (Automatic) Spark compute and attached Synapse Spark pool do not work in a notebook created in a private link enabled workspace.
-[This quickstart](./quickstart-spark-jobs.md) describes how to start using Managed (Automatic) Spark compute to submit your Spark jobs in Azure Machine Learning.
+[This quickstart](./quickstart-spark-data-wrangling.md) describes how to start using Managed (Automatic) Spark compute in Azure Machine Learning.
## Next steps
+- [Quickstart: Submit Apache Spark jobs in Azure Machine Learning (preview)](./quickstart-spark-jobs.md)
- [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md) - [Interactive data wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md) - [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
For more information on using Azure Pipelines with Machine Learning, see:
Learn more by reading and exploring the following resources: ++ [Set up MLOps with Azure DevOps](how-to-setup-mlops-azureml.md) + [Learning path: End-to-end MLOps with Azure Machine Learning](/training/paths/build-first-machine-operations-workflow/) + [How to deploy a model to an online endpoint](how-to-deploy-online-endpoints.md) with Machine Learning + [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
version = registered_model.version
__endpoint.yaml__
- <!-- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml"::: -->
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/create-endpoint.yaml":::
# [Python (Azure ML SDK)](#tab/sdk)
version = registered_model.version
# [Azure CLI](#tab/cli)
- <!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint"::: -->
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="create_endpoint":::
# [Python (Azure ML SDK)](#tab/sdk)
version = registered_model.version
__sklearn-deployment.yaml__
- <!-- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml"::: -->
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/ncd/sklearn-deployment.yaml":::
# [Python (Azure ML SDK)](#tab/sdk)
version = registered_model.version
# [Azure CLI](#tab/cli)
- <!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment"::: -->
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-ncd.sh" ID="create_sklearn_deployment":::
# [Python (Azure ML SDK)](#tab/sdk)
Once your deployment completes, your deployment is ready to serve request. One o
**sample-request-sklearn.json**
-<!-- :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sample-request-sklearn.json"::: -->
> [!NOTE] > Notice how the key `input_data` has been used in this example instead of `inputs` as used in MLflow serving. This is because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. See [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#differences-between-models-deployed-in-azure-machine-learning-and-mlflow-built-in-server) for details about expected input format.
To submit a request to the endpoint, you can do as follows:
# [Azure CLI](#tab/cli)
-<!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="test_sklearn_deployment"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
Use the following steps to deploy an MLflow model with a custom scoring script.
**sample-request-sklearn.json**
- <!-- :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sample-request-sklearn.json"::: -->
+ :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/ncd/sample-request-sklearn.json":::
To submit a request to the endpoint, you can do as follows:
Once you're done with the endpoint, you can delete the associated resources:
# [Azure CLI](#tab/cli)
-<!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="delete_endpoint"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
- Previously updated : 10/21/2021+ Last updated : 02/09/2023
# Export or delete your Machine Learning service workspace data
-In Azure Machine Learning, you can export or delete your workspace data using either the portal's graphical interface or the Python SDK. This article describes both options.
+In Azure Machine Learning, you can export or delete your workspace data using either the portal graphical interface or the Python SDK. This article describes both options.
[!INCLUDE [GDPR-related guidance](../../includes/gdpr-dsr-and-stp-note.md)]
In Azure Machine Learning, you can export or delete your workspace data using ei
## Control your workspace data
-In-product data stored by Azure Machine Learning is available for export and deletion. You can export and delete using Azure Machine Learning studio, CLI, and SDK. Telemetry data can be accessed through the Azure Privacy portal.
+In-product data stored by Azure Machine Learning is available for export and deletion. You can export and delete data with Azure Machine Learning studio, the CLI, and the SDK. Additionally, you can access telemetry data through the Azure Privacy portal.
-In Azure Machine Learning, personal data consists of user information in job history documents.
+In Azure Machine Learning, personal data consists of user information in job history documents.
## Delete high-level resources using the portal
When you create a workspace, Azure creates several resources within the resource
- An Applications Insights instance - A key vault
-These resources can be deleted by selecting them from the list and choosing **Delete**:
+To delete these resources, select them from the list and choose **Delete**:
> [!IMPORTANT]
-> If the resource is configured for soft delete, the data won't be deleted unless you optionally select to delete the resource permanently. For more information, see the following articles:
+> If the resource is configured for soft delete, the data won't actually be deleted unless you select the option to delete the resource permanently. For more information, see the following articles:
> * [Workspace soft-deletion](concept-soft-delete.md). > * [Soft delete for blobs](../storage/blobs/soft-delete-blob-overview.md). > * [Soft delete in Azure Container Registry](../container-registry/container-registry-soft-delete-policy.md).
These resources can be deleted by selecting them from the list and choosing **De
:::image type="content" source="media/how-to-export-delete-data/delete-resource-group-resources.png" alt-text="Screenshot of portal, with delete icon highlighted.":::
-Job history documents, which may contain personal user information, are stored in the storage account in blob storage, in subfolders of `/azureml`. You can download and delete the data from the portal.
+Job history documents, which may contain personal user information, are stored in the storage account in blob storage, in `/azureml` subfolders. You can download and delete the data from the portal.
:::image type="content" source="media/how-to-export-delete-data/storage-account-folders.png" alt-text="Screenshot of azureml directory in storage account, within the portal."::: ## Export and delete machine learning resources using Azure Machine Learning studio
-Azure Machine Learning studio provides a unified view of your machine learning resources, such as notebooks, data assets, models, and jobs. Azure Machine Learning studio emphasizes preserving a record of your data and experiments. Computational resources such as pipelines and compute resources can be deleted using the browser. For these resources, navigate to the resource in question and choose **Delete**.
+Azure Machine Learning studio provides a unified view of your machine learning resources, such as notebooks, data assets, models, and jobs. Azure Machine Learning studio emphasizes preservation of a record of your data and experiments. You can delete computational resources such as pipelines and compute resources with the browser. For these resources, navigate to the resource in question and choose **Delete**.
-Data assets can be unregistered and jobs can be archived, but these operations don't delete the data. To entirely remove the data, data assets and job data must be deleted at the storage level. Deleting at the storage level is done using the portal, as described previously. An individual Job can be deleted directly in studio. Deleting a Job deletes the Job's data.
+You can unregister data assets and archive jobs, but these operations don't delete the data. To entirely remove the data, data assets and job data require deletion at the storage level. Storage-level deletion happens in the portal, as described earlier. You can delete an individual job directly in the studio; deleting a job also deletes that job's data.
-You can download training artifacts from experimental jobs using the Studio. Choose the **Job** in which you're interested. Choose **Output + logs** and navigate to the specific artifacts you wish to download. Choose **...** and **Download** or select **Download all**.
+You can download training artifacts from experimental jobs in the studio. Choose the relevant **Job**. Choose **Output + logs**, and navigate to the specific artifacts you wish to download. Choose **...** and **Download**, or select **Download all**.
-You can download a registered model by navigating to the **Model** and choosing **Download**.
+To download a registered model, navigate to the **Model** and choose **Download**.
:::image type="contents" source="media/how-to-export-delete-data/model-download.png" alt-text="Screenshot of studio model page with download option highlighted.":::
machine-learning How To Integrate Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-integrate-azure-policy.md
You can also assign policies by using [Azure PowerShell](../governance/policy/as
## Conditional access policies
-To control who can access your Azure Machine Learning workspace, use Azure Active Directory [Conditional Access](../active-directory/conditional-access/overview.md).
+> [!IMPORTANT]
+> [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview) is __not__ supported with Azure Machine Learning.
## Enable self-service using landing zones
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Learn how to set up authentication to your Azure Machine Learning workspace from
Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](how-to-assign-roles.md).
-Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only.
- ## Prerequisites * Create an [Azure Machine Learning workspace](how-to-manage-workspace.md).
print(ml_client)
## Use Conditional Access
-As an administrator, you can enforce [Azure AD Conditional Access policies](../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you
-can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to Machine Learning Cloud app.
+> [!IMPORTANT]
+> [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview) is __not__ supported with Azure Machine Learning.
## Next steps
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Below is a list of common deployment errors that are reported as part of the dep
* [ImageBuildFailure](#error-imagebuildfailure) * [OutOfQuota](#error-outofquota)
-* [OutOfCapacity](#error-outofcapacity)
* [BadArgument](#error-badargument) * [ResourceNotReady](#error-resourcenotready) * [ResourceNotFound](#error-resourcenotfound)
Below is a list of common resources that might run out of quota when using Azure
* [Memory](#memory-quota) * [Role assignments](#role-assignment-quota) * [Endpoints](#endpoint-quota)
+* [Region-wide VM capacity](#region-wide-vm-capacity)
* [Other](#other-quota) Additionally, below is a list of common resources that might run out of quota only for Kubernetes online endpoint:
A possible mitigation is to check if there are unused deployments that can be de
#### Disk quota
-This issue happens when the size of the model is larger than the available disk space and the model is not able to be downloaded. Try a SKU with more disk space.
-* Try a [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md) with more disk space.
-* Try reducing image and model size.
+This issue happens when the size of the model is larger than the available disk space and the model can't be downloaded. Try a [SKU](reference-managed-online-endpoints-vm-sku-list.md) with more disk space, or reduce the image and model size.
#### Memory quota
-This issue happens when the memory footprint of the model is larger than the available memory. Try a [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md) with more memory.<br>
-
-#### Endpoint quota
-
-Try to delete some unused endpoints in this subscription.
+This issue happens when the memory footprint of the model is larger than the available memory. Try a [SKU](reference-managed-online-endpoints-vm-sku-list.md) with more memory.
#### Role assignment quota
-When you are creating a managed online endpoint, role assignment is required for the [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to access workspace resources. If you've reached the [role assignment limit](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-rbac-limits), try to delete some unused role assignments in this subscription. You can check all role assignments in the Azure portal by going to the Access Control menu.
+When you are creating a managed online endpoint, role assignment is required for the [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to access workspace resources. If you've reached the [role assignment limit](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-rbac-limits), try to delete some unused role assignments in this subscription. You can check all role assignments in the Azure portal by navigating to the Access Control menu.
-#### Kubernetes quota
+#### Endpoint quota
-This issue happens when the requested CPU or memory couldn't be satisfied, such as nodes are cordoned or nodes are unavailable, which means all nodes are unschedulable.
+Try to delete some unused endpoints in this subscription. If all of your endpoints are actively in use, you can try [requesting an endpoint quota increase](how-to-manage-quotas.md#endpoint-quota-increases).
-Try to delete some unused endpoints in this subscription. Alternatively, follow [How to manage quotas](how-to-manage-quotas.md#endpoint-quota-increases) to request endpoint quota increase.
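To check for unused endpoints programmatically, a minimal sketch with the Azure Machine Learning Python SDK v2 could look like the following; the workspace details and the endpoint name are placeholders.

```python
# Minimal sketch, assuming the azure-ai-ml and azure-identity packages are installed.
# The workspace details and endpoint name below are placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Review the existing online endpoints in the workspace.
for endpoint in ml_client.online_endpoints.list():
    print(endpoint.name)

# Delete an endpoint that's no longer needed to free up quota.
ml_client.online_endpoints.begin_delete(name="<unused-endpoint-name>").result()
```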
+#### Region-wide VM capacity
-Adjust your request in the cluster, you can directly [adjust resource request of the instance type](how-to-manage-kubernetes-instance-types.md).
+Due to a lack of Azure Machine Learning capacity in the region, the service has failed to provision the specified VM size. Retry later or try deploying to a different region.
-##### Container can't be scheduled
+#### Kubernetes quota
+
+This issue happens when the requested CPU or memory can't be provided. For example, nodes might be cordoned or otherwise unavailable, meaning that they're unschedulable. When you deploy a model to a Kubernetes compute target, Azure Machine Learning attempts to schedule the service with the requested amount of resources. If no nodes with the appropriate amount of resources are available in the cluster after 5 minutes, the deployment fails. To work around this issue, try to delete some unused endpoints in this subscription. You can also address this error by adding more nodes, changing the SKU of your nodes, or changing the resource requirements of your service.
-When you are deploying a model to a Kubernetes compute target, Azure Machine Learning will attempt to schedule the service with the requested amount of resources. If there are no nodes available in the cluster with the appropriate amount of resources after 5 minutes, the deployment will fail. The failure message is `Couldn't Schedule because the kubernetes cluster didn't have available resources after trying for 00:05:00`. You can address this error by either adding more nodes, changing the SKU of your nodes, or changing the resource requirements of your service.
+The error message will typically indicate which resource you need more of. For instance, if you see an error message detailing `0/3 nodes are available: 3 Insufficient nvidia.com/gpu`, that means that the service requires GPUs and there are three nodes in the cluster that don't have sufficient GPUs. This can be addressed by adding more nodes if you're using a GPU SKU, switching to a GPU-enabled SKU if you aren't, or changing your environment to not require GPUs.
-The error message will typically indicate which resource you need more of - for instance, if you see an error message indicating `0/3 nodes are available: 3 Insufficient nvidia.com/gpu` that means that the service requires GPUs and there are three nodes in the cluster that don't have available GPUs. This could be addressed by adding more nodes if you're using a GPU SKU, switching to a GPU enabled SKU if you aren't or changing your environment to not require GPUs.
+You can also adjust your request in the cluster by directly [adjusting the resource request of the instance type](how-to-manage-kubernetes-instance-types.md).
#### Other quota
Use the **Endpoints** in the studio:
-### ERROR: OutOfCapacity
-
-For managed online endpoint, the specified VM Size failed to provision due to a lack of Azure Machine Learning capacity. Retry later or try deploying to a different region.
- ### ERROR: BadArgument Below is a list of reasons you might run into this error when using either managed online endpoint or Kubernetes online endpoint:
The reason you might run into this error when creating/updating Kubernetes onlin
In this case, you can detach and then **re-attach** your compute.
-> [!NOTE]
+> [!NOTE]
> > To troubleshoot errors by re-attaching, please make sure to re-attach with the exact same configuration as the previously detached compute, such as the same compute name and namespace; otherwise, you may encounter other errors.
machine-learning Migrate To V2 Execution Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
print(azureml_url) ```
-* SDK v2: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing-mlflow.ipynb).
+* SDK v2: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-classification-task-bankmarketing/automl-classification-task-bankmarketing.ipynb).
```python # Imports
machine-learning Quickstart Spark Data Wrangling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-data-wrangling.md
Previously updated : 02/06/2023 Last updated : 02/10/2023 #Customer intent: As a Full Stack ML Pro, I want to perform interactive data wrangling in Azure Machine Learning, with Apache Spark.
Last updated 02/06/2023
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] - The Azure Machine Learning integration with Azure Synapse Analytics (preview) provides easy access to the Apache Spark framework for interactive data wrangling in Azure Machine Learning notebooks. In this quickstart guide, you'll learn how to perform interactive data wrangling using Azure Machine Learning Managed (Automatic) Synapse Spark compute, an Azure Data Lake Storage (ADLS) Gen 2 storage account, and user identity passthrough.
We must ensure that the input and output data paths are accessible, before we st
To assign appropriate roles to the user identity:
-1. In the Microsoft Azure portal, navigate to the Azure Data Lake Storage (ADLS) Gen 2 storage account page
+1. Open the [Microsoft Azure portal](https://portal.azure.com).
+1. Search and select the **Storage accounts** service.
+
+ :::image type="content" source="media/quickstart-spark-data-wrangling/find-storage-accounts-service.png" lightbox="media/quickstart-spark-data-wrangling/find-storage-accounts-service.png" alt-text="Expandable screenshot showing Storage accounts service search and selection, in Microsoft Azure portal.":::
+
+1. On the **Storage accounts** page, select the Azure Data Lake Storage (ADLS) Gen 2 storage account from the list. A page showing the storage account **Overview** will open.
+
+ :::image type="content" source="media/quickstart-spark-data-wrangling/storage-accounts-list.png" lightbox="media/quickstart-spark-data-wrangling/storage-accounts-list.png" alt-text="Expandable screenshot showing selection of the Azure Data Lake Storage (ADLS) Gen 2 storage account.":::
+ 1. Select **Access Control (IAM)** from the left panel 1. Select **Add role assignment**
A Managed (Automatic) Spark compute is available in Azure Machine Learning Noteb
## Interactive data wrangling with Titanic data > [!TIP]
-> Data wrangling with a Managed (Automatic) Spark compute, and user identity passthrough for data access in a Azure Data Lake Storage (ADLS) Gen 2 storage account, both require the lowest number of configuration steps.
+> Data wrangling with a Managed (Automatic) Spark compute, and user identity passthrough for data access in an Azure Data Lake Storage (ADLS) Gen 2 storage account, both require the lowest number of configuration steps.
-The data wrangling code shown here uses the `titanic.csv` file, available [here](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/spark/data/titanic.csv). Upload this file to a container created in the Azure Data Lake Storage (ADLS) Gen 2 storage account. This Python code snippet shows interactive data wrangling with an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and an input/output data URI, in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`:
+The data wrangling code shown here uses the `titanic.csv` file, available [here](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/spark/data/titanic.csv). Upload this file to a container created in the Azure Data Lake Storage (ADLS) Gen 2 storage account. This Python code snippet shows interactive data wrangling with an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and an input/output data URI, in the `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` format. Here, `<FILE_SYSTEM_NAME>` matches the container name.
```python import pyspark.pandas as pd
df.to_csv(
## Next steps - [Apache Spark in Azure Machine Learning (preview)](./apache-spark-azure-ml-concepts.md)
+- [Quickstart: Submit Apache Spark jobs in Azure Machine Learning (preview)](./quickstart-spark-jobs.md)
- [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md) - [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md) - [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
machine-learning Quickstart Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md
Previously updated : 01/09/2023 Last updated : 02/10/2023 #Customer intent: As a Full Stack ML Pro, I want to submit a Spark job in Azure Machine Learning.
Last updated 01/09/2023
The Azure Machine Learning integration, with Azure Synapse Analytics (preview), provides easy access to distributed computing capability - backed by Azure Synapse - for scaling Apache Spark jobs on Azure Machine Learning.
-In this quickstart guide, you'll learn how to submit a Spark job using Azure Machine Learning Managed (Automatic) Spark compute, Azure Data Lake Storage (ADLS) Gen 2 storage account, and user identity passthrough in a few simple steps.
+In this quickstart guide, you learn how to submit a Spark job using Azure Machine Learning Managed (Automatic) Spark compute, Azure Data Lake Storage (ADLS) Gen 2 storage account, and user identity passthrough in a few simple steps.
-See [this resource](./apache-spark-azure-ml-concepts.md) for more information about **Apache Spark in Azure Machine Learning** concepts.
+For more information about **Apache Spark in Azure Machine Learning** concepts, see [this resource](./apache-spark-azure-ml-concepts.md).
## Prerequisites
Before we submit an Apache Spark job, we must ensure that input, and output, dat
To assign appropriate roles to the user identity:
-1. Navigate to the Azure Data Lake Storage (ADLS) Gen 2 storage account page in the Microsoft Azure portal.
+1. Open the [Microsoft Azure portal](https://portal.azure.com).
+1. Search for, and select, the **Storage accounts** service.
+
+ :::image type="content" source="media/quickstart-spark-jobs/find-storage-accounts-service.png" lightbox="media/quickstart-spark-jobs/find-storage-accounts-service.png" alt-text="Expandable screenshot showing search for and selection of Storage accounts service, in Microsoft Azure portal.":::
+
+1. On the **Storage accounts** page, select the Azure Data Lake Storage (ADLS) Gen 2 storage account from the list. A page showing **Overview** of the storage account opens.
+
+ :::image type="content" source="media/quickstart-spark-jobs/storage-accounts-list.png" lightbox="media/quickstart-spark-jobs/storage-accounts-list.png" alt-text="Expandable screenshot showing selection of the Azure Data Lake Storage (ADLS) Gen 2 storage account.":::
+ 1. Select **Access Control (IAM)** from the left panel. 1. Select **Add role assignment**.
df.to_csv(args.wrangled_data, index_col="PassengerId")
> [!NOTE] > - This Python code sample uses `pyspark.pandas`, which is only supported by Spark runtime version 3.2.
-> - Please ensure that `titanic.py` file is uploaded to a folder named `src`. The `src` folder should be located in the same directory where you have created the Python script/notebook or the YAML specification file defining the standalone Spark job.
+> - Please ensure that the `titanic.py` file is uploaded to a folder named `src`. The `src` folder should be located in the same directory where you have created the Python script/notebook or the YAML specification file defining the standalone Spark job.
-The above script takes two arguments `--titanic_data` and `--wrangled_data`, which pass the path of input data and output folder respectively. The script uses `titanic.csv` file, which can be [found here](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/spark/data/titanic.csv). This file should be uploaded to the Azure Data Lake Storage (ADLS) Gen 2 storage account.
+The script takes two arguments: `--titanic_data` and `--wrangled_data`. These arguments pass the input data path and the output folder path, respectively. The script uses the `titanic.csv` file, [available here](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/spark/data/titanic.csv). Upload this file to a container created in the Azure Data Lake Storage (ADLS) Gen 2 storage account.
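For reference, a minimal sketch of how `titanic.py` might wire up these two arguments is shown below. This is an assumed outline rather than the full script from the examples repository; the actual wrangling steps are elided.

```python
# Assumed outline of titanic.py: parse the two job arguments, then read and write
# with pyspark.pandas. The real script in the examples repository contains more steps.
import argparse

import pyspark.pandas as pd

parser = argparse.ArgumentParser()
parser.add_argument("--titanic_data", type=str, help="Input path to titanic.csv")
parser.add_argument("--wrangled_data", type=str, help="Output folder for wrangled data")
args = parser.parse_args()

# Read the input data from the abfss:// URI passed by the job.
df = pd.read_csv(args.titanic_data, index_col="PassengerId")

# ... data wrangling steps elided ...

# Write the wrangled data to the output folder passed by the job.
df.to_csv(args.wrangled_data, index_col="PassengerId")
```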
## Submit a standalone Spark job
The above script takes two arguments `--titanic_data` and `--wrangled_data`, whi
> - terminal of [Visual Studio Code connected to an Azure Machine Learning compute instance](./how-to-set-up-vs-code-remote.md?tabs=studio). > - your local computer that has [the Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public) installed.
-This example YAML specification shows a standalone Spark job. It uses an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and input/output data URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`:
+This example YAML specification shows a standalone Spark job. It uses an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and input/output data URI in the `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` format. Here, `<FILE_SYSTEM_NAME>` matches the container name.
```yaml $schema: http://azureml/sdk-2-0/SparkJob.json
az ml job create --file <YAML_SPECIFICATION_FILE_NAME>.yaml --subscription <SUBS
> - [Visual Studio Code connected to an Azure Machine Learning compute instance](./how-to-set-up-vs-code-remote.md?tabs=studio). > - your local computer that has [the Azure Machine Learning SDK for Python](/python/api/overview/azure/ai-ml-readme) installed.
-This Python code snippet shows the creation of a standalone Spark job, with an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and input/output data URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`:
+This Python code snippet shows the creation of a standalone Spark job with an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and input/output data URIs in the `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` format. Here, `<FILE_SYSTEM_NAME>` matches the container name.
```python from azure.ai.ml import MLClient, spark, Input, Output
In the above code sample:
# [Studio UI](#tab/studio-ui) First, upload the parameterized Python code `titanic.py` to the Azure Blob storage container for workspace default datastore `workspaceblobstore`. To submit a standalone Spark job using the Azure Machine Learning studio UI: 1. In the left pane, select **+ New**. 2. Select **Spark job (preview)**. 3. On the **Compute** screen:
- :::image type="content" source="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" lightbox="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" alt-text="Expandable screenshot showing compute selection screen for a new Spark job in Azure Machine Learning studio UI.":::
+ :::image type="content" source="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" lightbox="media/quickstart-spark-jobs/create-standalone-spark-job-compute.png" alt-text="Expandable screenshot showing compute selection screen for a new Spark job in the Azure Machine Learning studio UI.":::
1. Under **Select compute type**, select **Spark automatic compute (Preview)** for Managed (Automatic) Spark compute. 2. Select **Virtual machine size**. The following instance types are currently supported:
First, upload the parameterized Python code `titanic.py` to the Azure Blob stora
2. Select **Input type** as **Data**. 3. Select **Data type** as **File**. 4. Select **Data source** as **URI**.
- 5. Enter an Azure Data Lake Storage (ADLS) Gen 2 data URI for `titanic.csv` file in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`.
+ 5. Enter an Azure Data Lake Storage (ADLS) Gen 2 data URI for `titanic.csv` file in the `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` format. Here, `<FILE_SYSTEM_NAME>` matches the container name.
7. To add an input, select **+ Add output** under **Outputs** and 1. Enter **Output name** as `wrangled_data`. The output should refer to this name later in the **Arguments**. 2. Select **Output type** as **Folder**.
- 3. For **Output URI destination**, enter an Azure Data Lake Storage (ADLS) Gen 2 folder URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`.
+ 3. For **Output URI destination**, enter an Azure Data Lake Storage (ADLS) Gen 2 folder URI in the `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` format. Here `<FILE_SYSTEM_NAME>` matches the container name.
8. Enter **Arguments** as `--titanic_data ${{inputs.titanic_data}} --wrangled_data ${{outputs.wrangled_data}}`. 5. Under the **Spark configurations** section: 1. For **Executor size**:
First, upload the parameterized Python code `titanic.py` to the Azure Blob stora
> [!TIP]
-> You may have an existing Synapse Spark pool in your Azure Synapse workspace. If you want to use an existing Synapse Spark pool, please follow the instructions to [attach a Synapse Spark pool in Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
+> You might have an existing Synapse Spark pool in your Azure Synapse workspace. To use an existing Synapse Spark pool, please follow the instructions to [attach a Synapse Spark pool in Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
## Next steps
+- [Apache Spark in Azure Machine Learning (preview)](./apache-spark-azure-ml-concepts.md)
+- [Quickstart: Interactive Data Wrangling with Apache Spark (preview)](./quickstart-spark-data-wrangling.md)
- [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md) - [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md) - [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
Learn how to set up authentication to your Azure Machine Learning workspace. Aut
Regardless of the authentication workflow used, Azure role-based access control (Azure RBAC) is used to scope the level of access (authorization) allowed to the resources. For example, an admin or automation process might have access to create a compute instance, but not use it, while a data scientist could use it, but not delete or create it. For more information, see [Manage access to Azure Machine Learning workspace](../how-to-assign-roles.md).
-Azure AD Conditional Access can be used to further control or restrict access to the workspace for each authentication workflow. For example, an admin can allow workspace access from managed devices only.
- ## Prerequisites * Create an [Azure Machine Learning workspace](../how-to-manage-workspace.md).
ws = Workspace(subscription_id="your-sub-id",
## Use Conditional Access
-As an administrator, you can enforce [Azure AD Conditional Access policies](../../active-directory/conditional-access/overview.md) for users signing in to the workspace. For example, you
-can require two-factor authentication, or allow sign in only from managed devices. To use Conditional Access for Azure Machine Learning workspaces specifically, [assign the Conditional Access policy](../../active-directory/conditional-access/concept-conditional-access-cloud-apps.md) to Machine Learning Cloud app.
+> [!IMPORTANT]
+> [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview) is __not__ supported with Azure Machine Learning.
## Next steps
managed-grafana Concept Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-whats-new.md
Previously updated : 01/18/2023 Last updated : 02/06/2023 # What's New in Azure Managed Grafana
+## February 2023
+
+### Support for SMTP settings
+
+Configuring SMTP settings for Azure Managed Grafana is now supported.
+
+For more information, go to [SMTP settings](how-to-smtp-settings.md).
+ ## January 2023 ### Support for Grafana Enterprise
managed-grafana How To Smtp Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-smtp-settings.md
+
+ Title: 'How to configure SMTP settings (preview) within Azure Managed Grafana'
+
+description: Learn how to configure SMTP settings (preview) to generate email notifications for Azure Managed Grafana
++++ Last updated : 02/01/2023++
+# Configure SMTP settings (preview)
+
+In this guide, learn how to configure SMTP settings to generate email alerts in Azure Managed Grafana. Notifications alert users when specified conditions occur on a Grafana dashboard.
+
+> [!IMPORTANT]
+> Email settings are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+SMTP settings can be enabled on an existing Azure Managed Grafana instance via the Azure portal and the Azure CLI. Enabling SMTP settings while creating a new instance is currently not supported.
+
+## Prerequisites
+
+To follow the steps in this guide, you must have:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Managed Grafana instance. If you don't have one yet, [create a new instance](quickstart-managed-grafana-portal.md).
+- An SMTP server. If you don't have one yet, you may want to consider using [Twilio SendGrid's email API for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sendgrid.tsg-saas-offer).
+
+## Enable and configure SMTP settings
+
+To activate SMTP settings, enable email notifications, and configure an email contact point in Azure Managed Grafana, follow the steps below.
+
+### [Portal](#tab/azure-portal)
+
+ 1. In the Azure portal, open your Grafana instance and under **Settings**, select **Configuration**.
+ 1. Select the **Email Settings (Preview)** tab.
+ :::image type="content" source="media/smtp-settings/find-settings.png" alt-text="Screenshot of the Azure platform. Selecting the SMTP settings tab.":::
+ 1. Toggle **SMTP Settings** on, so that **Enable** is displayed.
+ 1. SMTP settings appear. Fill out the form with the following configuration:
+
+ | Parameter | Example | Description |
+ |-|--|--|
+ | Host | test.sendgrid.net:587 | Enter the SMTP server hostname with port. |
+ | User | admin | Enter the user name for SMTP authentication. |
+ | Password | password | Enter the password for SMTP authentication. If the password contains "#" or ";", wrap it within triple quotes. |
+ | From Address | user@domain.com | Enter the email address used when sending out emails. |
+ | From Name | Azure Managed Grafana Notification | Enter the name used when sending out emails. Default is "Azure Managed Grafana Notification" if the parameter isn't given or is empty. |
+ | Skip Verify | Disable | This setting controls whether a client verifies the server's certificate chain and host name. If **Skip Verify** is **Enable**, the client accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to machine-in-the-middle attacks unless custom verification is used. Default is **Disable** (toggled off). [More information](https://pkg.go.dev/crypto/tls#Config). |
+ | StartTLS Policy | OpportunisticStartTLS | There are 3 options. [More information](https://pkg.go.dev/github.com/go-mail/mail#StartTLSPolicy).<br><ul><li>**OpportunisticStartTLS** means that SMTP transactions are encrypted if STARTTLS is supported by the SMTP server. Otherwise, messages are sent in the clear. This is the default setting.</li><li>**MandatoryStartTLS** means that SMTP transactions must be encrypted. SMTP transactions are aborted unless STARTTLS is supported by the SMTP server.</li><li>**NoStartTLS** means encryption is disabled and messages are sent in the clear.</li></ul> |
+
+ 1. Select **Save** to save the SMTP settings. Updating may take a couple of minutes.
+
+ :::image type="content" source="media/smtp-settings/save-updated-settings.png" alt-text="Screenshot of the Azure platform. Email Settings tab with new data.":::
+
+ 1. Once the process has completed, the message "Updating the selections. Update successful" is displayed in the Azure **Notifications**. In the **Overview** page, the provisioning state of the instance turns to **Updating**, and then **Succeeded** once the update is complete.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Azure Managed Grafana CLI extension 1.1 or above is required to enable or update SMTP settings. To update your extension, run `az extension update --name amg`.
+1. Run the [az grafana update](/cli/azure/grafana#az-grafana-update) command to configure SMTP settings for a given Azure Managed Grafana instance. When doing this, replace the placeholders below with information from your own instance.
+
+ ```azurecli
+ az grafana update --resource-group <resource-group> \
+ --name <azure-managed-grafana-name> \
+ --smtp enabled \
+ --from-address <from-address> \
+ --from-name <from-name> \
+ --host "<host>" \
+ --user <user> \
+ --password "<password>" \
+ --start-tls-policy <start-TLS-policy> \
+ --skip-verify <true-or-false> \
+ ```
+
+ | Parameter | Example | Description |
+ |-||-|
+ | `--resource-group` | my-resource-group | Enter the name of the Azure Managed Grafana instance's resource group. |
+ | `--name` | my-azure-managed-grafana | Enter the name of the Azure Managed Grafana instance. |
+ | `--smtp` | enabled | Enter **enabled** to enable SMTP settings. |
+ | `--from-address` | user@domain.com | Enter the email address used when sending out emails. |
+ | `--from-name` | Azure Managed Grafana Notification | Enter the name used when sending out emails. Default is "Azure Managed Grafana Notification" if parameter isn't given or empty. |
+ | `--host` | test.sendgrid.net:587 | Enter the SMTP server hostname with port. |
+ | `--user` | admin | Enter the user name for SMTP authentication. |
+ | `--password` | password | Enter the password for SMTP authentication. If the password contains "#" or ";", wrap it within triple quotes. |
+ | `--start-tls-policy` | OpportunisticStartTLS | The StartTLSPolicy setting of the SMTP configuration. There are 3 options. [More information](https://pkg.go.dev/github.com/go-mail/mail#StartTLSPolicy).<br><ul><li>**OpportunisticStartTLS** means that SMTP transactions are encrypted if STARTTLS is supported by the SMTP server. Otherwise, messages are sent in the clear. This is the default setting.</li><li>**MandatoryStartTLS** means that SMTP transactions must be encrypted. SMTP transactions are aborted unless STARTTLS is supported by the SMTP server.</li><li>**NoStartTLS** means encryption is disabled and messages are sent in the clear.</li></ul> |
+ | `--skip-verify` | false | This setting controls whether a client verifies the server's certificate chain and host name. If **--skip-verify** is **true**, the client accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to machine-in-the-middle attacks unless custom verification is used. Default is **false**. [More information](https://pkg.go.dev/crypto/tls#Config). |
+++
+## Configure Grafana contact points and send a test email
+
+Configuring Grafana contact points is done in the Grafana portal:
+
+ 1. In your Azure Managed Grafana workspace, in **Overview**, select the **Endpoint** URL.
+ 1. Go to **Alerting > Contact points**.
+ 1. Select **New contact point** or **Edit contact point** to update an existing contact point.
+
+ :::image type="content" source="media/smtp-settings/contact-points.png" alt-text="Screenshot of the Grafana platform. Updating contact points.":::
+
+ 1. Add or update the **Name**, and **Contact point type**.
+ 1. Enter a destination email under **Addresses**, and select **Test**.
+ 1. Select **Send test notification** to send the notification with the predefined test message or select **Custom** to first edit the message.
+ 1. A notification "Test alert sent" is displayed, meaning that the email setup has been successfully configured. The test email has been sent to the provided email address. In case of misconfiguration, an error message will be displayed instead.
+
+## Disable SMTP settings
+
+To disable SMTP settings, follow the steps below.
+
+### [Portal](#tab/azure-portal)
+
+1. In the Azure portal, go to **Configuration > Email Settings (Preview)** and toggle **SMTP Settings** off, so that **Disable** is displayed.
+1. Select **Save** to validate and start updating the Azure Managed Grafana instance.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Azure Managed Grafana CLI extension 1.1 or above is required to disable SMTP settings. To update your extension, run `az extension update --name amg`.
+1. Run the [az grafana update](/cli/azure/grafana#az-grafana-update) command to configure SMTP settings for a given Azure Managed Grafana instance. Replace the placeholders below with information from your own instance.
+
+ ```azurecli
+ az grafana update --resource-group <resource-group> \
+ --name <azure-managed-grafana-name> \
+ --smtp disabled
+ ```
+
+ | Parameter | Example | Description |
+ |-|||
+ | `--resource-group` | my-resource-group | Enter the name of the Azure Managed Grafana instance's resource group. |
+ | `--name` | my-azure-managed-grafana | Enter the name of the Azure Managed Grafana instance. |
+ | `--smtp` | disabled | Enter **disabled** to disable SMTP settings. |
+++
+> [!NOTE]
+> When a user disables SMTP settings, all SMTP credentials are removed from the backend. Azure Managed Grafana will not persist SMTP credentials when disabled.
+
+## Grafana alerting error messages
+
+Within the Grafana portal, you can find a list of all Grafana alerting error messages that occurred in **Alerting > Notifications**.
+
+Below are some common error messages you may encounter:
+
+- "Authentication failed: The provided authorization grant is invalid, expired, or revoked". Grafana couldn't connect to the SMTP server. Check if the password entered in the SMTP settings in the Azure portal is correct.
+- "Failed to sent test alert.: SMTP not configured". SMTP is disabled. Open the Azure Managed Grafana instance in the Azure portal and enable SMTP settings.
+
+## Known limitation
+
+Due to a limitation in the alerting high availability configuration of Azure Managed Grafana, duplicate email notifications might be delivered for a single firing alert.
+
+## Next steps
+
+In this how-to guide, you learned how to configure Grafana SMTP settings. To learn how to create and configure Grafana dashboards, go to:
+
+> [!div class="nextstepaction"]
+> [Create dashboards](how-to-create-dashboard.md)
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
Title: 'Tutorial: Diagnose a VM network routing problem - Azure portal'
description: In this tutorial, you learn how to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher.
-tags: azure-resource-manager
+ - Previously updated : 01/07/2021--
-# Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
Last updated : 02/10/2023+
+# Customer intent: I want to diagnose a virtual machine (VM) network routing problem that prevents communication to different destinations.
# Tutorial: Diagnose a virtual machine network routing problem using the Azure portal
-When you deploy a virtual machine (VM), Azure creates several default routes for it. You may create custom routes to override Azure's default routes. Sometimes, a custom route can result in a VM not being able to communicate with other resources. In this tutorial, you learn how to:
+When you deploy a virtual machine (VM), Azure creates several [system default routes](../virtual-network/virtual-networks-udr-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#system-routes) for it. You can create [custom routes](../virtual-network/virtual-networks-udr-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#custom-routes) to override some of Azure's system routes. Sometimes, a custom route can result in a VM not being able to communicate with the intended destination. You can use Azure Network Watcher to troubleshoot and diagnose the VM routing problem that prevents it from correctly communicating with other resources.
+
+In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Create a VM
-> * Test communication to a URL using the next hop capability of Network Watcher
-> * Test communication to an IP address
-> * Diagnose a routing problem, and learn how you can resolve it
+> * Create a virtual network and deploy two virtual machines in it
+> * Test communication to different IPs using the next hop capability of Azure Network Watcher
+> * View the effective routes
+> * Create a custom route
+> * Diagnose a routing problem
-If you prefer, you can diagnose a virtual machine network routing problem using the [Azure CLI](diagnose-vm-network-routing-problem-cli.md) or [Azure PowerShell](diagnose-vm-network-routing-problem-powershell.md).
+If you prefer, you can diagnose a virtual machine network routing problem using the [Azure CLI](diagnose-vm-network-routing-problem-cli.md) or [Azure PowerShell](diagnose-vm-network-routing-problem-powershell.md) tutorials.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Log in to Azure
-Log in to the Azure portal at https://portal.azure.com.
+## Prerequisites
+
+- An Azure subscription
++
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
++
+## Create a virtual network
+
+In this section, you create a virtual network.
+
+1. In the search box at the top of the portal, enter *virtual network*. Select **Virtual networks** in the search results.
+
+1. Select **+ Create**. In **Create virtual network**, enter or select the following in the **Basics** tab:
+
+ | Setting | Value |
+ | | |
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter *myVNet*. |
+ | Region | Select **East US**. |
+
+1. Select the **IP Addresses** tab, or select **Next: IP Addresses** button at the bottom of the page.
+
+1. Enter the following in the **IP Addresses** tab:
+
+ | Setting | Value |
+ | | |
+ | IPv4 address space | Enter *10.0.0.0/16*. |
+ | Subnet name | Enter *mySubnet*. |
+ | Subnet address range | Enter *10.0.0.0/24*. |
+
+1. Select the **Security** tab, or select the **Next: Security** button at the bottom of the page.
+
+1. Under **BastionHost**, select **Enable** and enter the following:
+
+ | Setting | Value |
+ | | |
+ | Bastion name | Enter *myBastionHost*. |
+ | AzureBastionSubnet address space | Enter *10.0.3.0/24*. |
+ | Public IP Address | Select **Create new**. </br> Enter *myBastionIP* for **Name**. </br> Select **OK**. |
+
+1. Select the **Review + create** tab or select the **Review + create** button.
-## Create a VM
+1. Review the settings, and then select **Create**.
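If you'd rather script this step than use the portal, a minimal sketch with the Azure SDK for Python (`azure-mgmt-network`) might look like the following. The subscription ID is a placeholder, the resource group is assumed to exist already, and the Bastion host and its subnet are omitted for brevity.

```python
# Minimal sketch, assuming the azure-mgmt-network and azure-identity packages are installed.
# Creates myVNet with the mySubnet subnet in an existing myResourceGroup (Bastion omitted).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

network_client.virtual_networks.begin_create_or_update(
    "myResourceGroup",
    "myVNet",
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "mySubnet", "address_prefix": "10.0.0.0/24"}],
    },
).result()
```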
-1. Select **+ Create a resource** found on the upper-left corner of the Azure portal.
-2. Select **Compute** and then select **Windows Server 2016 Datacenter** or **Ubuntu Server 17.10 VM**.
-3. Enter, or select, the following information, accept the defaults for the remaining settings, and then select **OK**:
- |Setting|Value|
- |||
- |Name|myVm|
- |User name| Enter a user name of your choosing.|
- |Password| Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fnetwork-watcher%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
- |Subscription| Select your subscription.|
- |Resource group| Select **Create new** and enter **myResourceGroup**.|
- |Location| Select **East US**|
+## Create virtual machines
-4. Select a size for the VM and then select **Select**.
-5. Under **Settings**, accept all the defaults, and select **OK**.
-6. Under **Create** of the **Summary**, select **Create** to start VM deployment. The VM takes a few minutes to deploy. Wait for the VM to finish deploying before continuing with the remaining steps.
+In this section, you create two virtual machines: **myVM** and **myNVA**. You test communication from the **myVM** virtual machine. The **myNVA** virtual machine is used as a network virtual appliance in this scenario.
-## Test network communication
-To test network communication with Network Watcher, you must first enable a network watcher in at least one Azure region and then use Network Watcher's next hop capability to test communication.
+### Create first virtual machine
-### Enable network watcher
+1. In the search box at the top of the portal, enter *virtual machine*. Select **Virtual machines** in the search results.
-If you already have a network watcher enabled in at least one region, skip to [Use next hop](#use-next-hop).
+2. Select **+ Create** and then select **Azure virtual machine**.
-1. In the portal, select **All services**. In the **Filter box**, enter *Network Watcher*. When **Network Watcher** appears in the results, select it.
-2. Select **Regions**, to expand it, and then select **...** to the right of **East US**, as shown in the following picture:
+3. In **Create a virtual machine**, enter or select the following in the **Basics** tab:
- ![Enable Network Watcher](./media/diagnose-vm-network-traffic-filtering-problem/enable-network-watcher.png)
+ | Setting | Value |
+ | | |
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter *myVM*. |
+ | Region | Select **(US) East US**. |
+ | Availability Options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter: Azure Edition - x64 Gen2**. |
+ | Size | Choose a size or leave the default setting. |
+ | **Administrator account** | |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
-3. Select **Enable Network Watcher**.
+4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-### Use next hop
+5. In the Networking tab, enter or select the following information:
-Azure automatically creates routes to default destinations. You may create custom routes that override the default routes. Sometimes, custom routes can cause communication to fail. Use the next hop capability of Network Watcher to determine which route Azure is using to route traffic.
+ | Setting | Value |
+ | | |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **mySubnet**. |
+ | Public IP | Leave the default. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **None**. |
-1. In the Azure portal, select **Next hop** under **Network Watcher**.
-2. Select your subscription, enter or select the following values, and then select **Next hop**, as shown in the picture that follows:
+6. Select **Review + create**.
- |Setting |Value |
- | | |
- | Resource group | Select myResourceGroup |
- | Virtual machine | Select myVm |
- | Network interface | myvm - Your network interface name may be different. |
- | Source IP address | 10.0.0.4 |
- | Destination IP address | 13.107.21.200 - One of the addresses for <www.bing.com>. |
+7. Review the settings, and then select **Create**.
- ![Next hop](./media/diagnose-vm-network-routing-problem/next-hop.png)
+8. Select **Go to resource** to go to the **Overview** page of **myVM**.
- After a few seconds, the result informs you that the next hop type is **Internet** and that the **Route table ID** is **System Route**. This result lets you know that there is a valid system route to the destination.
+9. Select **Connect**, then select **Bastion**.
+
+10. Enter the username and password that you created in the previous steps.
+
+11. Select **Connect** button.
+
+12. Once logged in, open a web browser and go to `www.bing.com` to verify it's reachable.
+
+ :::image type="content" source="./media/diagnose-vm-network-routing-problem/bing-allowed.png" alt-text="Screenshot showing Bing page in a web browser.":::
++
+### Create second virtual machine
+
+Follow the previous steps that you used to create **myVM** virtual machine and enter *myNVA* for the virtual machine name.
++
+## Test network communication using Network Watcher next hop
+
+Use the next hop capability of Network Watcher to determine which route Azure is using to route traffic from **myVM**, which has one network interface with one IP configuration.
+
+1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** in the search results.
+
+1. Under **Network diagnostic tools**, select **Next hop**. Enter or select the following information:
+
+ | Setting | Value |
+ | - | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | Virtual machine | Select **myVM**. |
+ | Network interface | Leave the default. |
+ | Source IP address | Enter *10.0.0.4* or the IP of your VM if it's different. |
+ | Destination IP address | Enter *13.107.21.200* to test the communication to `www.bing.com`. |
+
+1. Select the **Next hop** button to start the test. The test result shows information about the next hop, such as the next hop type, its IP address, and the route table ID used to route traffic. The result of testing **13.107.21.200** shows that the next hop type is **Internet** and the route table ID is **System Route**, which means traffic destined for `www.bing.com` from **myVM** is routed to the internet using the Azure default system route.
+
+ :::image type="content" source="./media/diagnose-vm-network-routing-problem/next-hop-internet.png" alt-text="Screenshot showing how to test communication to www.bing.com using Azure Network Watcher next hop capability.":::
+
+1. Change the **Destination IP address** to **10.0.0.5**, which is the IP address of the **myNVA** virtual machine, and then select the **Next hop** button. The result shows that the next hop type is **VirtualNetwork** and the route table ID is **System Route**, which means traffic destined for **10.0.0.5** from **myVM** is routed within the **myVNet** virtual network using the Azure default system route.
+
+ :::image type="content" source="./media/diagnose-vm-network-routing-problem/next-hop-virtual-network.png" alt-text="Screenshot showing Network Watcher next hop result when testing with an IP within the same virtual network.":::
+
+1. Next, change the **Destination IP address** to **10.1.0.5**, which is a private IP address that isn't in the address space of the **myVNet** virtual network, and then select the **Next hop** button. The result shows that the next hop type is **None**, which means traffic destined for **10.1.0.5** from **myVM** is dropped.
+
+ :::image type="content" source="./media/diagnose-vm-network-routing-problem/next-hop-none-system-route.png" alt-text="Screenshot showing Network Watcher next hop result when testing with a private IP outside the address space of the virtual network.":::
-3. Change the **Destination IP address** to *172.31.0.100* and select **Next hop** again. The result returned informs you that **None** is the **Next hop type** and that the **Route table ID** is also **System Route**. This result lets you know that, while there is a valid system route to the destination, there is no next hop to route the traffic to the destination.
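The same next hop test can also be run from code. The sketch below uses the `azure-mgmt-network` package; the subscription ID and VM resource ID are placeholders, and the Network Watcher name and resource group (Azure typically creates one named `NetworkWatcher_<region>` in `NetworkWatcherRG`) are assumptions you should verify for your environment.

```python
# Minimal sketch, assuming the azure-mgmt-network and azure-identity packages are installed.
# The Network Watcher name/resource group and the VM resource ID are assumptions to verify.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

vm_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Compute/virtualMachines/myVM"
)

result = network_client.network_watchers.begin_get_next_hop(
    "NetworkWatcherRG",
    "NetworkWatcher_eastus",
    {
        "target_resource_id": vm_id,
        "source_ip_address": "10.0.0.4",
        "destination_ip_address": "13.107.21.200",
    },
).result()

print(result.next_hop_type, result.route_table_id)
```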
## View details of a route
-1. To analyze routing further, review the effective routes for the network interface. In the search box at the top of the portal, enter *myvm* (or whatever the name was of the network interface you checked). When **myvm** appears in the search results, select it.
-2. Select **Effective routes** under **SUPPORT + TROUBLESHOOTING**, as shown in the following picture:
+To further analyze routing, review the effective routes for **myVM** network interface.
+
+1. In the search box at the top of the portal, enter *virtual machine*. Select **Virtual machines** in the search results.
+
+1. Select **myVM**. Under **Settings**, select **Networking**, then select the network interface.
+
+ :::image type="content" source="./media/diagnose-vm-network-routing-problem/select-network-interface.png" alt-text="Screenshot showing how to select the network interface page from the virtual machine settings in the Azure portal.":::
+
+1. Under **Help**, select **Effective routes** to see all the routes associated with the network interface of **myVM**.
+
+ :::image type="content" source="./media/diagnose-vm-network-routing-problem/effective-routes-default.png" alt-text="Screenshot showing Azure default system routes associated with the virtual machine network interface." lightbox="./media/diagnose-vm-network-routing-problem/effective-routes-default-expanded.png":::
+
+ In the previous section, when you ran the test using **13.107.21.200**, the route with the 0.0.0.0/0 address prefix was used to route traffic to that address because no other route includes the address. By default, all addresses not specified within the address prefix of another route are routed to the internet.
+
+ When you ran the test using **10.0.0.5**, the route with the 10.0.0.0/16 address prefix was used to route traffic to it.
+
+ However, when you ran the test using **10.1.0.5**, the result was **None** for the next hop type because this IP address is in the 10.0.0.0/8 address space. The Azure default route for the 10.0.0.0/8 address prefix has a next hop type of **None**. If you add an address prefix that contains 10.1.0.5 to the virtual network address space, then the next hop type for 10.1.0.5 changes from **None** to **VirtualNetwork**.
++
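You can also pull the effective routes from code. A minimal sketch with `azure-mgmt-network` follows; the subscription ID and NIC name are placeholders (find the actual NIC name on the VM's **Networking** page).

```python
# Minimal sketch, assuming the azure-mgmt-network and azure-identity packages are installed.
# <nic-name> is a placeholder; find the actual name on the VM's Networking page.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

routes = network_client.network_interfaces.begin_get_effective_route_table(
    "myResourceGroup", "<nic-name>"
).result()

# Each effective route reports its prefixes, next hop type, and next hop IPs.
for route in routes.value:
    print(route.address_prefix, route.next_hop_type, route.next_hop_ip_address)
```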
+## Test a routing problem due to custom routes
+
+Next, you'll create a static custom route to override Azure default system routes and cause a routing problem for the **myVM** virtual machine that prevents it from directly communicating with `www.bing.com`. Then, you'll use Network Watcher next hop to troubleshoot and diagnose the problem.
++
+### Create a custom route
+
+In this section, you create a static custom route (user-defined route) in a route table that forces all traffic destined outside the virtual network to a specific IP address. Forcing traffic to a network virtual appliance is a common scenario.
+
+1. In the search box at the top of the portal, enter *route table*. Select **Route tables** in the search results.
+
+1. Select **+ Create** to create a new route table. In the **Create Route table** page, enter or select the following:
+
+ | Setting | Value |
+ | - | |
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance Details** | |
+ | Region | Select **East US**. |
+ | Name | Enter *myRouteTable*. |
+ | Propagate gateway routes | Leave the default. |
+
+1. Select **Review + create**.
+
+1. Review the settings, and then select **Create**.
+
+1. Select **Go to resource**.
+
+1. Under **Settings**, select **Routes**, and then select **+ Add** to add a custom route.
+
+1. In the **Add route** page, enter or select the following:
+
+ | Setting | Value |
+ | - | |
+ | Route name | Enter *myRoute*. |
+ | Address prefix destination | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter *0.0.0.0/0*. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter *10.0.0.5*. |
+
+1. Select **Add**.
++
+### Associate the route table with the subnet
+
+In this section, you associate the route table that you created in the previous section with **mySubnet** subnet.
+
+1. Under **Settings**, select **Subnets**, and then select **+ Associate** to associate **myRouteTable** with **mySubnet** subnet.
+
+1. In the **Associate subnet** page, select the following:
+
+ | Setting | Value |
+ | - | |
+ | Virtual network | Select **myVNet (myResourcegroup)**. |
+ | Subnet | Select **mySubnet**. |
+
+1. Select **OK**.
++
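For reference, a minimal Python sketch of the same route table creation and subnet association (using `azure-mgmt-network`) is shown below. The subscription ID is a placeholder, and the subnet's existing address prefix must be restated when the subnet is updated.

```python
# Minimal sketch, assuming the azure-mgmt-network and azure-identity packages are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create the route table with a 0.0.0.0/0 route pointing at the myNVA private IP.
route_table = network_client.route_tables.begin_create_or_update(
    "myResourceGroup",
    "myRouteTable",
    {
        "location": "eastus",
        "routes": [
            {
                "name": "myRoute",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.0.5",
            }
        ],
    },
).result()

# Associate the route table with mySubnet; the existing address prefix must be restated.
network_client.subnets.begin_create_or_update(
    "myResourceGroup",
    "myVNet",
    "mySubnet",
    {"address_prefix": "10.0.0.0/24", "route_table": {"id": route_table.id}},
).result()
```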
+### Go to `www.bing.com`
+
+In **myVM**, open a web browser and go to `www.bing.com` to verify whether it's still reachable. The custom route that you created and associated with the subnet of **myVM** forces the traffic to go to **myNVA**. The traffic is dropped because, for the purposes of this tutorial, **myNVA** isn't set up to forward traffic, which demonstrates a routing problem.
+++
+### Test network communication using next hop
+
+Repeat the steps you used in the [Test network communication using Network Watcher next hop](#test-network-communication-using-network-watcher-next-hop) section, using **13.107.21.200** to test the communication to `www.bing.com`.
++
+## View effective routes
+
+Repeat the steps you used in [View details of a route](#view-details-of-a-route) to check the effective routes after adding the custom route that caused the issue in reaching `www.bing.com`.
- ![Effective routes](./media/diagnose-vm-network-routing-problem/effective-routes.png)
+The custom route with the 0.0.0.0/0 prefix overrode the Azure default route and caused all traffic destined outside the **myVNet** virtual network to go to 10.0.0.5.
- When you ran the test using 13.107.21.200 in [Use next hop](#use-next-hop), the route with the address prefix 0.0.0.0/0 was used to route traffic to the address since no other route includes the address. By default, all addresses not specified within the address prefix of another route are routed to the internet.
+
+> [!NOTE]
+> In this tutorial, traffic to `www.bing.com` was dropped because **myNVA** was not set up to forward traffic. To learn how to set up a virtual machine to forward traffic, see [Turn on IP forwarding](../virtual-network/tutorial-create-route-table-portal.md#turn-on-ip-forwarding).
- However, when you ran the test using 172.31.0.100, the result informed you that there was no next hop type. As you can see in the previous picture, though there is a default route to the 172.16.0.0/12 prefix, which includes the 172.31.0.100 address, the **NEXT HOP TYPE** is **None**. Azure creates a default route to 172.16.0.0/12 but doesn't specify a next hop type until there is a reason to. If, for example, you added the 172.16.0.0/12 address range to the address space of the virtual network, Azure changes the **NEXT HOP TYPE** to **Virtual network** for the route. A check would then show **Virtual network** as the **NEXT HOP TYPE**.
## Clean up resources When no longer needed, delete the resource group and all of the resources it contains:
-1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+1. Enter *Resource groups* in the **Search** box at the top of the portal, and then select **myResourceGroup**.
2. Select **Delete resource group**. 3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**. ## Next steps
-In this tutorial, you created a VM and diagnosed network routing from the VM. You learned that Azure creates several default routes and tested routing to two different destinations. Learn more about [routing in Azure](../virtual-network/virtual-networks-udr-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create custom routes](../virtual-network/manage-route-table.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-route).
+In this tutorial, you created a virtual machine and used Network Watcher next hop to diagnose routing to different destinations. To learn more about routing in Azure, see [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json).
-For outbound VM connections, you can also determine the latency, allowed and denied network traffic between the VM and an endpoint, and the route used to an endpoint, using Network Watcher's [connection troubleshoot](network-watcher-connectivity-portal.md) capability. Learn how you can monitor communication between a VM and an endpoint, such as an IP address or URL, over time using the Network Watcher connection monitor capability.
+For outbound VM connections, you can also determine the latency, allowed and denied network traffic between the VM and an endpoint, and the route used to an endpoint, using Network Watcher [connection troubleshoot](network-watcher-connectivity-portal.md) capability.
+To learn how to monitor communication between two virtual machines, advance to the next tutorial.
> [!div class="nextstepaction"] > [Monitor a network connection](connection-monitor.md)
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
There is a tradeoff between the query execution information pg_stat_statements p
TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions, optimizations, and scales Postgres for time-series workloads. [Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of Timescale, Inc.. Azure Database for PostgreSQL provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses). ## Installing TimescaleDB
-To install TimescaleDB, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
+To install TimescaleDB, in addition to allowlisting it as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to the Postgres `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
Using the [Azure portal](https://portal.azure.com/):
Example:
ORDER BY a.aid; ``` The above example will cause the planner to use the results of a `seq scan` on table a to be combined with table b as a `hash join`.+
+To install pg_hint_plan, in addition to allowlisting it as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to the Postgres `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
+
+Using the [Azure portal](https://portal.azure.com/):
+
+1. Select your Azure Database for PostgreSQL server.
+
+2. On the sidebar, select **Server Parameters**.
+
+3. Search for the `shared_preload_libraries` parameter.
+
+4. Select **pg_hint_plan**.
+
+5. Select **Save** to preserve your changes. You get a notification once the change is saved.
+
+6. After the notification, **restart** the server to apply these changes.
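If you prefer the Azure CLI, the same parameter change can be sketched as follows. The server and resource group names are placeholders, and if `shared_preload_libraries` already lists other libraries, include them in the value as a comma-separated list:

```azurecli
# Set shared_preload_libraries to include pg_hint_plan (server and resource group names are placeholders)
az postgres flexible-server parameter set --resource-group myResourceGroup \
  --server-name mydemoserver --name shared_preload_libraries --value pg_hint_plan

# Restart the server so the change to shared_preload_libraries takes effect
az postgres flexible-server restart --resource-group myResourceGroup --name mydemoserver
```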
++
+You can now enable pg_hint_plan in your Postgres database. Connect to the database and issue the following command:
+```sql
+CREATE EXTENSION IF NOT EXISTS pg_hint_plan CASCADE;
+```
+> [!TIP]
> If you see an error, confirm that you [restarted your server](how-to-restart-server-portal.md) after saving `shared_preload_libraries`.
+ ## Next steps If you don't see an extension that you'd like to use, let us know. Vote for existing requests or create new feedback requests in our [feedback forum](https://feedback.azure.com/d365community/forum/c5e32b97-ee24-ec11-b6e6-000d3a4f0da0).
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
You must already have an AP5GC site deployed to collect diagnostics.
## Collect values for diagnostics package gathering
-1. Create a storage account for diagnostics.
- 1. [Create a storage account](../storage/common/storage-account-create.md) with the following additional configuration:
- 1. In the **Advanced** tab, select **Enable storage account key access**. This will allow your support representative to download traces stored in this account using the URLs you share with them.
- 1. In the **Data protection** tab, under **Access control**, select **Enable version-level immutability support**. This will allow you to specify a time-based retention policy for the account in the next step.
+1. [Create a storage account](../storage/common/storage-account-create.md) for diagnostics with the following additional configuration (an Azure CLI sketch of these resources follows this list):
+ 1. In the **Advanced** tab, select **Enable storage account key access**. This will allow your support representative to download traces stored in this account using the URLs you share with them.
+ 1. In the **Data protection** tab, under **Access control**, select **Enable version-level immutability support**. This will allow you to specify a time-based retention policy for the account in the next step.
1. If you would like the content of your storage account to be automatically deleted after a period of time, [configure a default time-based retention policy](../storage/blobs/immutable-policy-configure-version-scope.md#configure-a-default-time-based-retention-policy) for your storage account. 1. [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) for your diagnostics.
+ 1. Make a note of the **Container blob** URL. For example:
+ `https://storageaccountname.blob.core.windows.net/diagscontainername`
+ 1. Navigate to your **Storage account**.
+ 1. Select the **...** symbol on the right side of the container blob that you want to use for diagnostics collection.
+ 1. Select **Container properties** in the context menu.
+ 1. Copy the contents of the **URL** field in the **Container properties** view.
1. Create a [User-assigned identity](../active-directory/managed-identities-azure-resources/overview.md) and assign it to the storage account created above with the **Storage Blob Data Contributor** role. > [!TIP]
- > Make sure same User-assigned identity is used during site creation.
+ > Make sure the same User-assigned identity is used during site creation.
1. Navigate to the **Packet core control plane** resource for the site. 1. Select **Identity** under **Settings** on the left side menu. 1. Toggle **Modify user assigned managed identity?** to **Yes** and select **+ Add**.
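For reference, the storage account, container, and user-assigned identity described in this list can also be sketched with the Azure CLI. The resource names are placeholders, and the version-level immutability and retention settings from the portal steps aren't shown here:

```azurecli
# Storage account and container for diagnostics packages (names are placeholders)
az storage account create --resource-group myResourceGroup --name storageaccountname --location eastus --sku Standard_LRS
az storage container create --account-name storageaccountname --name diagscontainername --auth-mode login

# User-assigned identity with the Storage Blob Data Contributor role on the storage account
az identity create --resource-group myResourceGroup --name diagsIdentity
principalId=$(az identity show --resource-group myResourceGroup --name diagsIdentity --query principalId --output tsv)
storageId=$(az storage account show --resource-group myResourceGroup --name storageaccountname --query id --output tsv)
az role assignment create --assignee "$principalId" --role "Storage Blob Data Contributor" --scope "$storageId"
```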
You must already have an AP5GC site deployed to collect diagnostics.
1. Sign in to the [Azure portal](https://portal.azure.com/). 1. Navigate to the **Packet Core Control Pane** overview page of the site you want to gather diagnostics for. 1. Select **Diagnostics Collection** under the **Support + Troubleshooting** section on the left side. This will open a **Diagnostics Collection** view.
-1. Enter the **Storage account blob URL** that was configured for diagnostics storage. For example:
- `https://storageaccount.blob.core.windows.net/diags/diagsPackage_1.zip`
+1. Enter the **Container URL** that was configured for diagnostics storage and append the file name that you want to give the diagnostics. For example:
+ `https://storageaccountname.blob.core.windows.net/diagscontainername/diagsPackageName.zip`
+ > [!TIP]
+ > The **Container URL** should have been noted during creation. If it wasn't:
+ >
+ > 1. Navigate to your **Storage account**.
+ > 1. Select the **...** symbol on the right side of the container blob that you want to use for diagnostics collection.
+ > 1. Select **Container properties** in the context menu.
+ > 1. Copy the contents of the **URL** field in the **Container properties** view.
+ 1. Select **Diagnostics collection**. 1. AP5GC online service will generate a package and upload it to the provided storage account URL. Once AP5GC reports that the upload has succeeded, report the SAS URL to Azure support. 1. Generate a SAS URL by selecting **Generate SAS** on the blob details blade.
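If you prefer the Azure CLI, a SAS URL for the uploaded package can be generated as sketched below. The account, container, and blob names and the expiry date are placeholders:

```azurecli
# Generate a read-only SAS URL for the uploaded diagnostics package
accountKey=$(az storage account keys list --resource-group myResourceGroup \
  --account-name storageaccountname --query "[0].value" --output tsv)
az storage blob generate-sas --account-name storageaccountname --account-key "$accountKey" \
  --container-name diagscontainername --name diagsPackageName.zip \
  --permissions r --expiry 2024-01-01T00:00Z --https-only --full-uri --output tsv
```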
You must already have an AP5GC site deployed to collect diagnostics.
## Troubleshooting - If diagnostics file collection fails, an activity log will appear in the portal allowing you to troubleshoot via ARM:
- - If an invalid storage account blob URL was passed, the request will be rejected and report **400 Bad Request**. Repeat the process with the correct storage account blob URL.
+ - If an invalid container URL was passed, the request will be rejected and report **400 Bad Request**. Repeat the process with the correct container URL.
- If the asynchronous part of the operation fails, the asynchronous operation resource is set to **Failed** and reports a failure reason. - Additionally, check that the same user-assigned identity was added to both the site and storage account. - If this does not resolve the issue, share the correlation ID of the failed request with AP5GC support for investigation.
purview How To Policies Devops Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md
Title: Provision access to Azure Arc-enabled SQL Server for DevOps actions
-description: Step-by-step guide on provisioning access to Azure Arc-enabled SQL Server through Microsoft Purview DevOps policies
+ Title: Manage access to SQL Server 2022 system health and performance using Microsoft Purview DevOps policies, a type of RBAC policy.
+description: Use Microsoft Purview DevOps policies to provision access to SQL Server 2022 system metadata, so IT operations personnel can monitor performance, health and audit security, while limiting the insider threat.
Previously updated : 11/16/2022 Last updated : 02/10/2023
-# Provision access to system metadata in Azure Arc-enabled SQL Server
+# Provision access to system metadata in Azure Arc-enabled SQL Server 2022
-[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved they get automatically published and then get enforced by the data source.
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly from the Microsoft Purview governance portal, and after they are saved, they get automatically published and then enforced by the data source. Microsoft Purview policies only manage access for Azure AD principals.
-This how-to guide covers how to provision access from Microsoft Purview to Azure Arc-enabled SQL Server system metadata (DMVs and DMFs) *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD Accounts only.
+This how-to guide covers how to configure SQL Server 2022 to enforce policies created in Microsoft Purview. It covers onboarding with Azure Arc, enabling Azure AD on the SQL Server, and provisioning access to its system metadata (DMVs and DMFs) using the DevOps policies actions *SQL Performance Monitoring* or *SQL Security Auditing*.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
purview How To Policies Devops Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-authoring-generic.md
Title: Create, list, update and delete Microsoft Purview DevOps policies
-description: Step-by-step guide on provisioning access through Microsoft Purview DevOps policies
+ Title: Create, list, update and delete Microsoft Purview DevOps policies, so you can manage access to system health and performance counters.
+description: Use Microsoft Purview DevOps policies to provision access to database system metadata, so IT operations personnel can monitor performance, health and audit security, while limiting the insider threat. This guide covers the basic operations.
Previously updated : 11/16/2022 Last updated : 02/10/2023 # Create, list, update and delete Microsoft Purview DevOps policies
-[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved they get automatically published and then get enforced by the data source.
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly from the Microsoft Purview governance portal, and after they are saved, they get automatically published and then enforced by the data source. Microsoft Purview policies only manage access for Azure AD principals.
-This how-to guide covers how to provision access from Microsoft Purview to SQL-type data sources via *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD Accounts only.
+This guide covers the configuration steps in Microsoft Purview to provision access to database system metadata using the DevOps policies actions *SQL Performance Monitoring* or *SQL Security Auditing*. It goes into detail on creating, listing, updating and deleting DevOps policies.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
purview How To Policies Devops Azure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-azure-sql-db.md
Title: Provision access to Azure SQL Database for DevOps actions (preview)
-description: Step-by-step guide on provisioning access to Azure SQL Database through Microsoft Purview DevOps policies
+ Title: Manage access to Azure SQL Database system health and performance using Microsoft Purview DevOps policies, a type of RBAC policy.
+description: Use Microsoft Purview DevOps policies to provision access to Azure SQL Database system metadata, so IT operations personnel can monitor performance, health and audit security, while limiting the insider threat.
Previously updated : 11/04/2022- Last updated : 02/10/2023+ # Provision access to system metadata in Azure SQL Database (preview) [!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved they get automatically published and then get enforced by the data source.
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly from the Microsoft Purview governance portal, and after they are saved, they get automatically published and then enforced by the data source. Microsoft Purview policies only manage access for Azure AD principals.
-This how-to guide covers how to provision access from Microsoft Purview to Azure SQL Database system metadata (DMVs and DMFs) via *SQL Performance Monitoring* or *SQL Security Auditing* actions. Microsoft Purview access policies apply to Azure AD Accounts only.
+This how-to guide covers how to configure Azure SQL Database to enforce policies created in Microsoft Purview. It covers the configuration steps for Azure SQL Database and the ones in Microsoft Purview to provision access to Azure SQL Database system metadata (DMVs and DMFs) using the DevOps policies actions *SQL Performance Monitoring* or *SQL Security Auditing*.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
purview How To Policies Devops Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-resource-group.md
Title: Provision access to resource groups and subscriptions for DevOps actions
-description: Step-by-step guide showing how to provision access to entire resource groups and subscriptions through Microsoft Purview DevOps policies
+ Title: Manage access to entire resource groups or subscriptions for monitoring system health and performance using Microsoft Purview DevOps policies, a type of RBAC policy.
+description: Use Microsoft Purview DevOps policies to provision access to all data sources inside a resource group or subscription, so IT operations personnel can monitor performance, health and audit security, while limiting the insider threat.
Previously updated : 11/14/2022 Last updated : 02/10/2023 # Provision access to system metadata in resource groups or subscriptions
-[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata (DMVs and DMFs) via *SQL Performance Monitoring* or *SQL Security Auditing* actions. They can be created only on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after being saved they get automatically published and then get enforced by the data source. Microsoft Purview access policies apply to Azure AD Accounts only.
+[DevOps policies](concept-policies-devops.md) are a type of Microsoft Purview access policies. They allow you to manage access to system metadata on data sources that have been registered for *Data use management* in Microsoft Purview. These policies are configured directly in the Microsoft Purview governance portal, and after they are saved, they get automatically published and then enforced by the data source. Microsoft Purview policies only manage access for Azure AD principals.
+
+This how-to guide covers how to register an entire resource group or subscription and then create a single policy that will provision access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards, provisioning access to their system metadata (DMVs and DMFs) using the DevOps policies actions *SQL Performance Monitoring* or *SQL Security Auditing*.
+
-In this guide we cover how to register an entire resource group or subscription and then create a single policy that will manage access to **all** data sources in that resource group or subscription. That single policy will cover all existing data sources and any data sources that are created afterwards.
## Prerequisites [!INCLUDE [Access policies generic pre-requisites](./includes/access-policies-prerequisites-generic.md)]
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
This article outlines how to register SAP HANA, and how to authenticate and inte
||||||||| | [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No|No | No |
->[!NOTE]
->Supported version for SAP HANA is 15.
- When scanning SAP HANA source, Microsoft Purview supports extracting technical metadata including: - Server
sap Advanced State Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/advanced-state-management.md
Last updated 10/21/2021 + Title: advanced_state_management description: Updates the Terraform state file using a shell script
sap Install Deployer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/install-deployer.md
Last updated 10/21/2021 + Title: install_deployer.sh description: Bootstrap a new deployer in the control plane using a shell script.
sap Install Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/install-library.md
Last updated 10/21/2021 + Title: install_library.sh description: Bootstrap a new SAP Library in the control plane using a shell script.
sap Install Workloadzone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/install-workloadzone.md
Last updated 10/21/2021 + Title: install_workloadzone.sh description: Deploy a new SAP Workload Zone using a shell script.
sap Installer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/installer.md
Last updated 10/21/2021 + Title: installer.sh description: Deploy a new SAP system using a shell script.
sap Prepare Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/prepare-region.md
Last updated 10/21/2021 + Title: Prepare region description: Deploys the control plane (deployer, SAP library) using a shell script.
sap Remove Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/remove-region.md
Last updated 12/10/2021 + Title: Remove_region.sh description: Removes the SAP Control Plane (Deployer, Library) using a shell script.
sap Remover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/remover.md
Last updated 10/21/2021 + Title: remover.sh description: Remove a new SAP system using a shell script.
sap Set Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/set-secrets.md
Last updated 10/21/2021 + Title: set_secrets.sh description: Sets the SPN Secrets in Azure Key vault using a shell script.
sap Update Sas Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/update-sas-token.md
Last updated 10/21/2021 + Title: update_sas_token.sh description: Updates the SAP Library SAS token in Azure Key Vault
sap Bom Get Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bom-get-files.md
Last updated 11/17/2021 + # Acquire media for BOM creation
sap Bom Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bom-prepare.md
Last updated 11/17/2021 + # Prepare SAP BOM
sap Bom Templates Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bom-templates-db.md
Last updated 11/17/2021 + # Generate SAP Application templates for automation
sap Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md
Last updated 12/28/2022 + # Configure the control plane
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Last updated 12/1/2022 + # Use SAP on Azure Deployment Automation Framework from Azure DevOps Services
sap Configure Extra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-extra-disks.md
Last updated 06/09/2022 + # Change the disk configuration for the SAP deployment automation
sap Configure Sap Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-sap-parameters.md
Last updated 10/19/2022 + # Configure sap-parameters file
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
Last updated 05/03/2022 + # Configure SAP system parameters
sap Configure Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-webapp.md
Last updated 10/19/2022 + # Configure the Control Plane Web Application
sap Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-workload-zone.md
Last updated 09/13/2022 + # Workload zone configuration in SAP automation framework
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
Last updated 11/17/2021 + # Deploy the control plane
sap Deploy System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-system.md
Last updated 11/17/2021 + # SAP system deployment for the automation framework
sap Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-workload-zone.md
Last updated 11/17/2021 + # Workload zone deployment in SAP automation framework
sap Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deployment-framework.md
Last updated 05/29/2022 + # SAP on Azure Deployment Automation Framework
sap Devops Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/devops-tutorial.md
Last updated 10/19/2022 + # SAP on Azure Deployment Automation Framework DevOps - Hands-on lab
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Last updated 1/2/2023 + # Get started with SAP automation framework on Azure
sap Manual Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/manual-deployment.md
Last updated 11/17/2021 + # Get started with manual deployment
sap Naming Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/naming-module.md
Last updated 10/19/2022 + # Overview
sap Naming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/naming.md
Last updated 11/17/2021 + # Naming conventions for SAP automation framework
sap New Vs Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/new-vs-existing.md
Last updated 11/17/2021 + # Configuring for new and existing deployments
sap Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/plan-deployment.md
Last updated 11/17/2021 + # Plan your deployment of SAP automation framework
sap Reference Bash https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/reference-bash.md
keywords: 'Azure, SAP' + Last updated 11/17/2021
sap Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/run-ansible.md
Last updated 11/17/2021 + # Get started Ansible configuration
sap Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/software.md
Last updated 11/17/2021 + # Download SAP software
sap Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/supportability.md
Last updated 1/6/2023 + # Supportability matrix for the SAP Automation Framework
sap Tools Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tools-configuration.md
Last updated 10/19/2022 + # Configuring external tools to use with the SAP on Azure Deployment Automation Framework
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
Last updated 12/14/2021 +
sap Ha Setup With Fencing Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/ha-setup-with-fencing-device.md
editor: + vm-linux
sap Hana Additional Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-additional-network-requirements.md
editor: + vm-linux
sap Hana Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-architecture.md
editor: '' + vm-linux
sap Hana Available Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-available-skus.md
editor: '' keywords: 'HLI, HANA, SKUs, S896, S224, S448, S672, Optane, SAP' + vm-linux
sap Hana Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-backup-restore.md
editor: + vm-linux
sap Hana Certification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-certification.md
editor: '' + vm-linux
sap Hana Concept Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-concept-preparation.md
editor: + vm-linux
sap Hana Connect Azure Vm Large Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-connect-azure-vm-large-instances.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-linux
sap Hana Connect Vnet Express Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-connect-vnet-express-route.md
editor: + vm-linux
sap Hana Data Tiering Extension Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-data-tiering-extension-nodes.md
editor: '' + vm-linux
sap Hana Example Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-example-installation.md
editor: + vm-linux
sap Hana Failover Procedure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-failover-procedure.md
editor: + vm-linux
sap Hana Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-installation.md
editor: + vm-linux
sap Hana Know Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-know-terms.md
editor: '' + vm-linux
sap Hana Large Instance Enable Kdump https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-large-instance-enable-kdump.md
editor: + vm-linux
sap Hana Large Instance Virtual Machine Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-large-instance-virtual-machine-migration.md
editor: + vm-linux
sap Hana Li Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-li-portal.md
tags: azure-resource-manager + Last updated 07/01/2021
sap Hana Monitor Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-monitor-troubleshoot.md
documentationcenter:
+ vm-linux
sap Hana Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-network-architecture.md
editor: '' + vm-linux
sap Hana Onboarding Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-onboarding-requirements.md
editor: '' + vm-linux
sap Hana Operations Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-operations-model.md
editor: '' + vm-linux
sap Hana Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-overview-architecture.md
editor: '' + vm-linux
sap Hana Overview High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-overview-high-availability-disaster-recovery.md
editor: + vm-linux
sap Hana Overview Infrastructure Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-overview-infrastructure-connectivity.md
editor: + vm-linux
sap Hana Setup Smt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-setup-smt.md
editor: + vm-linux
sap Hana Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-sizing.md
editor: '' + vm-linux
sap Hana Storage Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-storage-architecture.md
editor: '' + vm-linux
sap Hana Supported Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-supported-scenario.md
editor: + vm-linux
sap Large Instance High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/large-instance-high-availability-rhel.md
description: Learn how to automate an SAP HANA database failover using a Pacemak
+ Last updated 04/19/2021
sap Large Instance Os Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/large-instance-os-backup.md
editor: + vm-linux
sap Os Backup Hli Type Ii Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-backup-hli-type-ii-skus.md
editor: + vm-linux
sap Os Compatibility Matrix Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-compatibility-matrix-hana-large-instance.md
editor: + vm-linux
sap Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-upgrade-hana-large-instance.md
editor: + vm-linux
sap Troubleshooting Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/troubleshooting-monitoring.md
documentationcenter:
+ vm-linux
sap About Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/about-azure-monitor-sap-solutions.md
Title: What is Azure Monitor for SAP solutions? (preview)
description: Learn about how to monitor your SAP resources on Azure for availability, performance, and operation. + Last updated 10/27/2022
sap Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/data-reference.md
description: Important reference material needed when you monitor SAP on Azure.
-++ Last updated 10/27/2022
sap Enable Tls Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/enable-tls-azure-monitor-sap-solutions.md
Title: Enable TLS 1.2 or higher
description: Learn what is secure communication with TLS 1.2 or higher in Azure Monitor for SAP solutions. + Last updated 12/14/2022
sap Get Alerts Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/get-alerts-portal.md
+ Last updated 10/19/2022 #Customer intent: As a developer, I want to configure alerts in Azure Monitor for SAP solutions so that I can receive alerts and notifications about my SAP systems.
sap Provider Ha Pacemaker Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ha-pacemaker-cluster.md
Title: Create a High Availability Pacemaker cluster provider for Azure Monitor f
description: Learn how to configure High Availability (HA) Pacemaker cluster providers for Azure Monitor for SAP solutions. + Last updated 01/05/2023
sap Provider Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-hana.md
Title: Configure SAP HANA provider for Azure Monitor for SAP solutions (preview)
description: Learn how to configure the SAP HANA provider for Azure Monitor for SAP solutions through the Azure portal. + Last updated 10/27/2022
sap Provider Ibm Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-ibm-db2.md
Title: Create IBM Db2 provider for Azure Monitor for SAP solutions (preview)
description: This article provides details to configure an IBM DB2 provider for Azure Monitor for SAP solutions. + Last updated 12/03/2022
sap Provider Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-linux.md
Title: Configure Linux provider for Azure Monitor for SAP solutions (preview)
description: This article explains how to configure a Linux OS provider for Azure Monitor for SAP solutions. + Last updated 01/05/2023
sap Provider Netweaver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-netweaver.md
Title: Configure SAP NetWeaver for Azure Monitor for SAP solutions (preview)
description: Learn how to configure SAP NetWeaver for use with Azure Monitor for SAP solutions. + Last updated 11/02/2022
sap Provider Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-sql-server.md
Title: Configure Microsoft SQL Server provider for Azure Monitor for SAP solutio
description: Learn how to configure a Microsoft SQL Server provider for use with Azure Monitor for SAP solutions. + Last updated 10/27/2022
sap Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/providers.md
Title: What are providers in Azure Monitor for SAP solutions? (preview)
description: This article provides answers to frequently asked questions about Azure Monitor for SAP solutions providers. + Last updated 10/27/2022
sap Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-portal.md
+ Last updated 10/19/2022 # Customer intent: As a developer, I want to deploy Azure Monitor for SAP solutions in the Azure portal so that I can configure providers.
sap Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/quickstart-powershell.md
+ Last updated 10/19/2022 ms.devlang: azurepowershell
sap Set Up Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/set-up-network.md
Title: Set up network for Azure Monitor for SAP solutions (preview)
description: Learn how to set up an Azure virtual network for use with Azure Monitor for SAP solutions. + Last updated 10/27/2022
sap Business One Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/business-one-azure.md
Title: SAP Business One on Azure Virtual Machines | Microsoft Docs
description: SAP Business One on Azure. + Last updated 02/11/2022
sap Businessobjects Deployment Guide Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide-linux.md
tags: azure-resource-manager
keywords: '' + vm-linux
sap Businessobjects Deployment Guide Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide-windows.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap Businessobjects Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap Cal Ides Erp6 Erp7 Sp3 Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-ides-erp6-erp7-sp3-sql.md
Title: Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on Azure | Microsoft Docs
description: Deploy SAP IDES EHP7 SP3 for SAP ERP 6.0 on Azure + Last updated 09/16/2016
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
keywords: ''
ms.assetid: 44bbd2b6-a376-4b5c-b824-e76917117fa9 + vm-linux
sap Certifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/certifications.md
tags: azure-resource-manager
keywords: '' ms.assetid: + vm-linux
sap Dbms Guide General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-general.md
Title: Considerations for Azure Virtual Machines DBMS deployment for SAP workloa
description: Considerations for Azure Virtual Machines DBMS deployment for SAP workload + Last updated 09/22/2020
sap Dbms Guide Ha Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ha-ibm.md
Title: Set up IBM Db2 HADR on Azure virtual machines (VMs) | Microsoft Docs
description: Establish high availability of IBM Db2 LUW on Azure virtual machines (VMs). + Last updated 12/06/2022
sap Dbms Guide Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ibm.md
tags: azure-resource-manager keywords: 'Azure, Db2, SAP, IBM' + Last updated 08/24/2022
sap Dbms Guide Maxdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-maxdb.md
tags: azure-resource-manager + Last updated 08/24/2022
sap Dbms Guide Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-oracle.md
tags: azure-resource-manager keywords: 'SAP, Azure, Oracle, Data Guard' + Last updated 08/24/2022
sap Dbms Guide Sapase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sapase.md
tags: azure-resource-manager + Last updated 11/30/2022
sap Dbms Guide Sapiq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sapiq.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap Dbms Guide Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sqlserver.md
tags: azure-resource-manager keywords: 'Azure, SQL Server, SAP, AlwaysOn, Always On' + Last updated 11/14/2022
sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/deployment-checklist.md
tags: azure-resource-manager + Last updated 11/21/2022
sap Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/deployment-guide.md
tags: azure-resource-manager ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e + vm-linux
sap Disaster Recovery Overview Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/disaster-recovery-overview-guide.md
+ Last updated 12/06/2022
sap Disaster Recovery Sap Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/disaster-recovery-sap-guide.md
+ Last updated 01/31/2023
sap Exchange Online Integration Sap Email Outbound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/exchange-online-integration-sap-email-outbound.md
description: Learn about Exchange Online integration for email outbound from SAP
+ Last updated 03/11/2022
sap Expose Sap Odata To Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/expose-sap-odata-to-power-query.md
description: Learn about configuring SAP Principal Propagation for live OData fe
+ Last updated 06/10/2022
sap Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/expose-sap-process-orchestration-on-azure.md
description: Learn about securely exposing SAP Process Orchestration on Azure.
+ Last updated 07/19/2022
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
Title: Get started with SAP on Azure VMs | Microsoft Docs
description: Learn about SAP solutions that run on virtual machines (VMs) in Microsoft Azure + documentationcenter: ''
sap Hana Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-get-started.md
tags: azure-resource-manager
keywords: '' ms.assetid: c51a2a06-6e97-429b-a346-b433a785c9f0 + vm-linux
sap Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations-netapp.md
tags: azure-resource-manager keywords: 'SAP, Azure, ANF, HANA, Azure NetApp Files, snapshot' + Last updated 12/28/2022
sap Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations-storage.md
tags: azure-resource-manager keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage' + Last updated 10/09/2022
sap Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations.md
tags: azure-resource-manager + Last updated 08/30/2022
sap Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md
tags: azure-resource-manager keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage' + Last updated 10/07/2022
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
tags: azure-resource-manager keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage, Premium SSD v2' + Last updated 12/14/2022
sap Hana Vm Troubleshoot Scale Out Ha On Sles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-troubleshoot-scale-out-ha-on-sles.md
+ vm-linux
sap Hana Vm Ultra Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-ultra-disk.md
tags: azure-resource-manager keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage' + Last updated 10/07/2022
sap High Availability Guide Rhel Glusterfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-glusterfs.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md
tags: azure-resource-manager keywords: 'SAP' + vm-linux
sap High Availability Guide Rhel Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-multi-sid.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md
tags: azure-resource-manager + Last updated 12/06/2022
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
tags: azure-resource-manager + Last updated 12/06/2022
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap High Availability Guide Rhel With Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-dialog-instance.md
tags: azure-resource-manager + vm-linux
sap High Availability Guide Rhel With Hana Ascs Ers Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance.md
tags: azure-resource-manager + vm-linux
sap High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md
tags: azure-resource-manager + Last updated 12/06/2022
sap High Availability Guide Standard Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-standard-load-balancer-outbound-connections.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap High Availability Guide Suse Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-multi-sid.md
tags: azure-resource-manager ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-windows
sap High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Guide Windows Azure Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-azure-files-smb.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Guide Windows Dfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-dfs.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Guide Windows Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-netapp-files-smb.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap High Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-zones.md
tags: azure-resource-manager ms.assetid: 887caaec-02ba-4711-bd4d-204a7d16b32b + Last updated 12/19/2022
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
Title: Get started with SAP and Azure integration scenarios description: Learn about the various integration points in the Microsoft ecosystem for SAP workloads. + Last updated 12/15/2022
sap Lama Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/lama-installation.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-linux
sap Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage.md
tags: azure-resource-manager ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538 + Last updated 12/28/2022
sap Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide.md
tags: azure-resource-manager + vm-linux
sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-supported-configurations.md
tags: azure-resource-manager ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538 + Last updated 01/27/2022
sap Proximity Placement Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/proximity-placement-scenarios.md
tags: azure-resource-manager + Last updated 12/18/2022
sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-linux
sap Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md
tags: azure-resource-manager
keywords: '' ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325 + vm-windows
sap Sap Ascs Ha Multi Sid Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-file-share.md
tags: azure-resource-manager ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325 + vm-windows
sap Sap Ascs Ha Multi Sid Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-shared-disk.md
tags: azure-resource-manager
keywords: '' ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325 + vm-windows
sap Sap Hana Availability Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-across-regions.md
tags: azure-resource-manager + Last updated 09/12/2018
sap Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-one-region.md
tags: azure-resource-manager + Last updated 07/27/2018
sap Sap Hana Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-overview.md
tags: azure-resource-manager + Last updated 03/05/2018
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
documentationcenter:
+ vm-linux
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
tags: azure-resource-manager + vm-linux
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
editor: + vm-linux
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
tags: azure-resource-manager ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md
tags: azure-resource-manager ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
editor: + vm-linux
sap Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md
tags: azure-resource-manager ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md
tags: azure-resource-manager ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap Sap High Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-architecture-scenarios.md
tags: azure-resource-manager ms.assetid: 887caaec-02ba-4711-bd4d-204a7d16b32b + vm-windows
sap Sap High Availability Guide Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-start.md
tags: azure-resource-manager
keywords: '' ms.assetid: 1cfcc14a-6795-4cfd-a740-aa09d6d2b817 + vm-windows
sap Sap High Availability Guide Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-file-share.md
tags: azure-resource-manager
keywords: '' ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87 + vm-windows
sap Sap High Availability Guide Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-shared-disk.md
tags: azure-resource-manager
keywords: '' ms.assetid: f6fb85f8-c77a-4af1-bde8-1de7e4425d2e + vm-windows
sap Sap High Availability Infrastructure Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-infrastructure-wsfc-file-share.md
tags: azure-resource-manager ms.assetid: 2ce38add-1078-4bb9-a1da-6f407a9bc910 + vm-windows
sap Sap High Availability Infrastructure Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk.md
tags: azure-resource-manager
keywords: '' ms.assetid: ec976257-396b-42a0-8ea1-01c97f820fa6 + vm-windows
sap Sap High Availability Installation Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-file-share.md
tags: azure-resource-manager ms.assetid: 71296618-673b-4093-ab17-b7a80df6e9ac + vm-windows
sap Sap High Availability Installation Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-shared-disk.md
tags: azure-resource-manager
keywords: '' ms.assetid: 6209bcb3-5b20-4845-aa10-1475c576659f + vm-windows
sap Sap Higher Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-higher-availability-architecture-scenarios.md
tags: azure-resource-manager
keywords: '' ms.assetid: f0b2f8f0-e798-4176-8217-017afe147917 + vm-windows
sap Sap Information Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-information-lifecycle-management.md
editor: ''
tags: azure-resource-manager keywords: '' + vm-linux
sap Supported Product On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/supported-product-on-azure.md
tags: azure-resource-manager ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538 + Last updated 02/02/2022
sap Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-new.md
tags: azure-resource-manager
keywords: '' ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e + vm-linux
sap Vm Extension For Sap Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-standard.md
tags: azure-resource-manager
keywords: '' ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e + vm-linux
sap Vm Extension For Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap.md
tags: azure-resource-manager
keywords: '' ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e + vm-linux
security Infrastructure Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-components.md
description: This article provides a general description of the Microsoft Azure
documentationcenter: na -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na Previously updated : 06/28/2018 Last updated : 02/09/2023 # Azure information system components and boundaries+ This article provides a general description of the Azure architecture and management. The Azure system environment is made up of the following networks: - Microsoft Azure production network (Azure network)
This article provides a general description of the Azure architecture and manage
Separate IT teams are responsible for operations and maintenance of these networks. ## Azure architecture
-Azure is a cloud computing platform and infrastructure for building, deploying, and managing applications and services through a network of datacenters. Microsoft manages these datacenters. Based on the number of resources you specify, Azure creates virtual machines (VMs) based on resource need. These VMs run on an Azure hypervisor, which is designed for use in the cloud and is not accessible to the public.
-On each Azure physical server node, there is a hypervisor that runs directly over the hardware. The hypervisor divides a node into a variable number of guest VMs. Each node also has one root VM, which runs the host operating system. Windows Firewall is enabled on each VM. You define which ports are addressable by configuring the service definition file. These ports are the only ones open and addressable, internally or externally. All traffic and access to the disk and network is mediated by the hypervisor and root operating system.
+Azure is a cloud computing platform and infrastructure for building, deploying, and managing applications and services through a network of datacenters. Microsoft manages these datacenters. Based on the number of resources you specify, Azure creates virtual machines (VMs) based on resource need. These VMs run on an Azure hypervisor, which is designed for use in the cloud and isn't accessible to the public.
+
+On each Azure physical server node, there's a hypervisor that runs directly over the hardware. The hypervisor divides a node into a variable number of guest VMs. Each node also has one root VM, which runs the host operating system. Windows Firewall is enabled on each VM. You define which ports are addressable by configuring the service definition file. These ports are the only ones open and addressable, internally or externally. All traffic and access to the disk and network is mediated by the hypervisor and root operating system.
-At the host layer, Azure VMs run a customized and hardened version of the latest Windows Server. Azure uses a version of Windows Server that includes only those components necessary to host VMs. This improves performance and reduces attack surface. Machine boundaries are enforced by the hypervisor, which doesn’t depend on the operating system security.
+At the host layer, Azure VMs run a customized and hardened version of the latest Windows Server. Azure uses a version of Windows Server that includes only those components necessary to host VMs. This improves performance and reduces attack surface. Machine boundaries are enforced by the hypervisor, which doesn't depend on the operating system security.
### Azure management by fabric controllers
The operating system team provides images, in the form of Virtual Hard Disks, de
There are three types of fabric-managed operating system images: - Host: A customized operating system that runs on host VMs.-- Native: A native operating system that runs on tenants (for example, Azure Storage). This operating system does not have any hypervisor.
+- Native: A native operating system that runs on tenants (for example, Azure Storage). This operating system doesn't have any hypervisor.
- Guest: A guest operating system that runs on guest VMs.
-The host and native FC-managed operating systems are designed for use in the cloud, and are not publicly accessible.
+The host and native FC-managed operating systems are designed for use in the cloud, and aren't publicly accessible.
#### Host and native operating systems
Host and native are hardened operating system images that host the fabric agents
Azure internal components running on guest operating system VMs have no opportunity to run Remote Desktop Protocol. Any changes to baseline configuration settings must go through the change and release management process. ## Azure datacenters+ The Microsoft Cloud Infrastructure and Operations (MCIO) team manages the physical infrastructure and datacenter facilities for all Microsoft online services. MCIO is primarily responsible for managing the physical and environmental controls within the datacenters, as well as managing and supporting outer perimeter network devices (such as edge routers and datacenter routers). MCIO is also responsible for setting up the bare minimum server hardware on racks in the datacenter. Customers have no direct interaction with Azure. ## Service management and service teams
-Various engineering groups, known as service teams, manage the support of the Azure service. Each service team is responsible for an area of support for Azure. Each service team must make an engineer available 24x7 to investigate and resolve failures in the service. Service teams do not, by default, have physical access to the hardware operating in Azure.
+
+Various engineering groups, known as service teams, manage the support of the Azure service. Each service team is responsible for an area of support for Azure. Each service team must make an engineer available 24x7 to investigate and resolve failures in the service. Service teams don't, by default, have physical access to the hardware operating in Azure.
The service teams are:
The service teams are:
- Storage ## Types of users+ Employees (or contractors) of Microsoft are considered to be internal users. All other users are considered to be external users. All Azure internal users have their employee status categorized with a sensitivity level that defines their access to customer data (access or no access). User privileges to Azure (authorization permission after authentication takes place) are described in the following table: | Role | Internal or external | Sensitivity level | Authorized privileges and functions performed | Access type | | | | | |
-| Azure datacenter engineer | Internal | No access to customer data | Manage the physical security of the premises. Conduct patrols in and out of the datacenter, and monitor all entry points. Escort into and out of the datacenter certain non-cleared personnel who provide general services (such as dining or cleaning) or IT work within the datacenter. Conduct routine monitoring and maintenance of network hardware. Perform incident management and break-fix work by using a variety of tools. Conduct routine monitoring and maintenance of the physical hardware in the datacenters. Access to environment on demand from property owners. Capable to perform forensic investigations, log incident reports, and require mandatory security training and policy requirements. Operational ownership and maintenance of critical security tools, such as scanners and log collection. | Persistent access to the environment. |
+| Azure datacenter engineer | Internal | No access to customer data | Manage the physical security of the premises. Conduct patrols in and out of the datacenter, and monitor all entry points. Escort into and out of the datacenter certain non-cleared personnel who provide general services (such as dining or cleaning) or IT work within the datacenter. Conduct routine monitoring and maintenance of network hardware. Perform incident management and break-fix work by using various tools. Conduct routine monitoring and maintenance of the physical hardware in the datacenters. Access to environment on demand from property owners. Able to perform forensic investigations and log incident reports, and subject to mandatory security training and policy requirements. Operational ownership and maintenance of critical security tools, such as scanners and log collection. | Persistent access to the environment. |
| Azure incident triage (rapid response engineers) | Internal | Access to customer data | Manage communications among MCIO, support, and engineering teams. Triage platform incidents, deployment issues, and service requests. | Just-in-time access to the environment, with limited persistent access to non-customer systems. | | Azure deployment engineers | Internal | Access to customer data | Deploy and upgrade platform components, software, and scheduled configuration changes in support of Azure. | Just-in-time access to the environment, with limited persistent access to non-customer systems. | | Azure customer outage support (tenant) | Internal | Access to customer data | Debug and diagnose platform outages and faults for individual compute tenants and Azure accounts. Analyze faults. Drive critical fixes to the platform or customer, and drive technical improvements across support. | Just-in-time access to the environment, with limited persistent access to non-customer systems. |
Communications between Azure internal components are protected with TLS encrypti
The FC maintains a set of credentials (keys and/or passwords) used to authenticate itself to various hardware devices under its control. Microsoft uses a system to prevent access to these credentials. Specifically, the transport, persistence, and use of these credentials are designed to prevent Azure developers, administrators, and backup services and personnel from accessing sensitive, confidential, or private information.
-Microsoft uses encryption based on the FC’s master identity public key. This occurs at FC setup and FC reconfiguration times, to transfer the credentials used to access networking hardware devices. When the FC needs the credentials, the FC retrieves and decrypts them.
+Microsoft uses encryption based on the FC's master identity public key. This occurs at FC setup and FC reconfiguration times, to transfer the credentials used to access networking hardware devices. When the FC needs the credentials, the FC retrieves and decrypts them.
### Network devices The Azure networking team configures network service accounts to enable an Azure client to authenticate to network devices (routers, switches, and load balancers). ## Secure service administration+ Azure operations personnel are required to use secure admin workstations (SAWs). Customers can implement similar controls by using privileged access workstations. With SAWs, administrative personnel use an individually assigned administrative account that is separate from the user's standard user account. The SAW builds on that account separation practice by providing a trustworthy workstation for those sensitive accounts. ## Next steps+ To learn more about what Microsoft does to help secure the Azure infrastructure, see: - [Azure facilities, premises, and physical security](physical-security.md)
security Paas Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/paas-deployments.md
Following are best practices for using App Service.
[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md) is an example of a PaaS. Like Azure App Service, this technology is designed to support applications that are scalable, reliable, and inexpensive to operate. In the same way that App Service is hosted on virtual machines (VMs), so too is Azure Cloud Services. However, you have more control over the VMs. You can install your own software on VMs that use Azure Cloud Services, and you can access them remotely. ## Install a web application firewall
-Web applications are increasingly targets of malicious attacks that exploit common known vulnerabilities. Common among these exploits are SQL injection attacks, cross site scripting attacks to name a few. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching and monitoring at many layers of the application topology. A centralized web application firewall helps make security management much simpler and gives better assurance to application administrators against threats or intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location versus securing each of individual web applications. Existing application gateways can be converted to a web application firewall enabled application gateway easily.
+Web applications are increasingly targets of malicious attacks that exploit commonly known vulnerabilities, such as SQL injection and cross-site scripting attacks. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching, and monitoring at many layers of the application topology. A centralized web application firewall helps make security management much simpler and gives application administrators better assurance of protection against threats and intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location, rather than securing each individual web application.
-[Web application firewall (WAF)](../../web-application-firewall/afds/afds-overview.md) is a feature of Application Gateway that provides centralized protection of your web applications from common exploits and vulnerabilities. WAF is based on rules from the [Open Web Application Security Project (OWASP) core rule sets](https://owasp.org/www-project-modsecurity-core-rule-set/) 3.0 or 2.2.9.
+[Web Application Firewall (WAF)](../../web-application-firewall/overview.md) provides centralized protection of your web applications from common exploits and vulnerabilities.
## DDoS protection
In this article, we focused on security advantages of an Azure PaaS deployment a
- [Azure Cloud Services](../../cloud-services/security-baseline.md) - Azure Cache for Redis - Azure Service Bus-- Web Application Firewalls
+- [Web Application Firewall](../../web-application-firewall/overview.md)
See [Develop secure applications on Azure](https://azure.microsoft.com/resources/develop-secure-applications-on-azure/) for security questions and controls you should consider at each phase of the software development lifecycle when developing applications for the cloud.
service-bus-messaging Service Bus Java How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-queues.md
In this quickstart, you'll create a Java app to send messages to and receive mes
> [!NOTE] > This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built Java samples for Azure Service Bus in the [Azure SDK for Java repository on GitHub](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/servicebus/azure-messaging-servicebus/src/samples).
+> [!TIP]
+> If you're working with Azure Service Bus resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Service Bus, see [Spring Cloud Stream with Azure Service Bus](/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-with-service-bus).
+ ## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF). - If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue. Note down the **connection string** for your Service Bus namespace and the name of the **queue** you created.
service-bus-messaging Service Bus Java How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions.md
In this quickstart, you write Java code using the azure-messaging-servicebus pac
> [!NOTE] > This quick start provides step-by-step instructions for a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. You can find pre-built Java samples for Azure Service Bus in the [Azure SDK for Java repository on GitHub](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/servicebus/azure-messaging-servicebus/src/samples).
+> [!TIP]
+> If you're working with Azure Service Bus resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Service Bus, see [Spring Cloud Stream with Azure Service Bus](/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-with-service-bus).
+ ## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
service-bus-messaging Service Bus Resource Manager Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-exceptions.md
Here are the various exceptions/errors that are surfaced through the Azure Resou
| Bad Request | 40000 | Sub code=40000. Both DelayedPersistence and RequiresDuplicateDetection property can't be enabled together. | Entities with Duplicate detection enabled on them must be persistent, so persistence can't be delayed. | Learn more about [Duplicate Detection](duplicate-detection.md) | | Bad Request | 40000 | Sub code=40000. The value for RequiresSession property of an existing Queue can't be changed. | Support for sessions should be enabled at the time of entity creation. Once created, you can't enable/disable sessions on an existing entity (queue or subscription) | Delete and recreate a new queue (or subscription) with the "RequiresSession" property enabled. | | Bad Request | 40000 | Sub code=40000. 'URI_PATH' contains character(s) that isn't allowed by Service Bus. Entity segments can contain only letters, numbers, periods(.), hyphens(-), and underscores(_). | Entity segments can contain only letters, numbers, periods(.), hyphens(-), and underscores(_). Any other characters cause the request to fail. | Ensure that there are no invalid characters in the URI Path. |
+| Bad Request | 40000 | Sub code=40000. Bad request. To know more visit `https://aka.ms/sbResourceMgrExceptions`. TrackingId:00000000-0000-0000-0000-00000000000000_000, SystemTracker:contososbusnamesapce.servicebus.windows.net:myqueue, Timestamp:yyyy-mm-ddThh:mm:ss | This error occurs when you try to create a queue in a non-premium tier namespace with a value set to the property `maxMessageSizeInKilobytes`. This property can only be set for queues in the premium namespace. | Create the queue in a premium namespace, or remove the `maxMessageSizeInKilobytes` property from the request. See the illustrative sketch after this table. |
| Bad Request | 40300 | Sub code=40300. The maximum number of resources of type `EnablePartioning == true` has been reached or exceeded. | There's a limit on number of partitioned entities per namespace. See [Quotas and limits](service-bus-quotas.md). | | | Bad Request | 40400 | Sub code=40400. The auto forwarding destination entity doesn't exist. | The destination for the autoforwarding destination entity doesn't exist. | The destination entity (queue or topic), must exist before the source is created. Retry after creating the destination entity. |
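As referenced in the `maxMessageSizeInKilobytes` row above, the following is a brief, hedged C# sketch of setting that property when creating a queue in a premium namespace. The connection string, queue name, and helper method name are placeholders, and the sketch assumes the `MaxMessageSizeInKilobytes` option exposed by the `Azure.Messaging.ServiceBus` administration client; it is not part of the article itself.

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus.Administration;

// Placeholder connection string for a premium-tier Service Bus namespace.
await CreateLargeMessageQueueAsync("<premium-namespace-connection-string>");

static async Task CreateLargeMessageQueueAsync(string premiumConnectionString)
{
    var adminClient = new ServiceBusAdministrationClient(premiumConnectionString);

    var options = new CreateQueueOptions("myqueue")
    {
        // Maximum message size in KB; configurable only for queues in premium namespaces.
        MaxMessageSizeInKilobytes = 102400
    };

    await adminClient.CreateQueueAsync(options);
}
```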
service-fabric Service Fabric Concept Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concept-resource-model.md
To delete an application that was deployed by using the application resource mod
Remove-AzResource -ResourceId <String> [-Force] [-ApiVersion <String>] ```
+## Common questions and answers
+
+Error: "Application name must be a prefix of service name"
+ Answer: Make sure the service name is formatted as follows: ProfileVetSF~CallTicketDataWebApi.
+ ## Next steps Get information about the application resource model:
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6594.1 | 5.1.8095.0 | 9.53.6594.1 | 5.1.8103.0 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9260.0
[Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 (VMware) & 5.1.7882.0 (Hyper-V) | 2.0.9259.0 [Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0 [Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0 [Rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 9.49.6395.1 | 5.1.7418.0 | 9.49.6395.1 | 5.1.7418.0 | 2.0.9248.0
-[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9245.0
+ [Learn more](service-updates-how-to.md) about update installation and support. +
+## Updates (February 2023)
+
+### Update Rollup 66
+
+[Update rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for Ubuntu 22.04, RHEL 8.7, and CentOS 8.7 Linux distros.
+**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 22.04, RHEL 8.7, and CentOS 8.7 Linux distros.
+ ## Updates (November 2022) ### Update Rollup 65
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article. **Azure VM disaster recovery** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro.
-**VMware VM/physical disaster recovery to Azure** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro.<br/><br/> Added Modernized VMware to Azure DR support for government clouds. [Learn more](deploy-vmware-azure-replication-appliance-modernized.md#allow-urls-for-government-clouds).
+**VMware VM/physical disaster recovery to Azure** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro.<br/><br/> Added Modernized VMware to Azure DR support for government clouds. [Learn more](deploy-vmware-azure-replication-appliance-modernized.md#allow-urls-for-government-clouds).
## Updates (October 2022)
static-web-apps Deploy Nextjs Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-hybrid.md
Begin by adding an API route.
:::image type="content" source="media/deploy-nextjs/nextjs-api-route-display.png" alt-text="Display the output from the API route":::
+## Enable standalone feature
+
+When your application size exceeds 100 MB, the Next.js [Output File Tracing](https://nextjs.org/docs/advanced-features/output-file-tracing) feature helps optimize the app size and enhance performance.
+
+Output File Tracing creates a compressed version of the whole application with the necessary package dependencies built into a folder named *.next/standalone*. This folder is meant to be deployed on its own, without additional *node_modules* dependencies.
+
+To enable the `standalone` feature, add the following property to your `next.config.js`:
+```js
+module.exports = {
+  output: "standalone",
+}
+```
+ ## Enable logging for Next.js Following best practices for Next.js server API troubleshooting, add logging to the API to catch these errors. Logging on Azure uses **Application Insights**. In order to preload this SDK, you need to create a custom start up script. To learn more:
static-web-apps Nextjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/nextjs.md
Key features that are available in the preview are:
- [Internationalization](https://nextjs.org/docs/advanced-features/i18n-routing) - [Middleware](https://nextjs.org/docs/advanced-features/middleware) - [Authentication](https://nextjs.org/docs/authentication)
+- [Output File Tracing](https://nextjs.org/docs/advanced-features/output-file-tracing)
Follow the [deploy hybrid Next.js applications](deploy-nextjs-hybrid.md) tutorial to learn how to deploy a hybrid Next.js application to Azure.
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
Working with any Azure resource using the SDK begins with creating a client obje
### Create a BlobServiceClient object
-An authorized `BlobServiceClient` object allows your app to interact with resources at the storage account level.
+An authorized `BlobServiceClient` object allows your app to interact with resources at the storage account level. `BlobServiceClient` provides methods to retrieve and configure account properties, as well as list, create, and delete containers within the storage account. This client object is the starting point for interacting with resources in the storage account.
-A common scenario is to instantiate a single service client, then create container clients and blob clients from the service client, as needed. `BlobServiceClient` provides methods to retrieve and configure account properties, as well as list, create, and delete containers within the storage account. This client object is the starting point for interacting with resources in the storage account.
-
-To work with a specific container or blob, you can use the `BlobServiceClient` object to create a [container client](#create-a-blobcontainerclient-object) or [blob client](#create-a-blobclient-object). Clients created from a `BlobServiceClient` will inherit its client configuration, including client options and credentials.
+A common scenario is to instantiate a single service client, then create container clients and blob clients from the service client, as needed. To work with a specific container or blob, you can use the `BlobServiceClient` object to create a [container client](#create-a-blobcontainerclient-object) or [blob client](#create-a-blobclient-object). Clients created from a `BlobServiceClient` will inherit its client configuration, including client options and credentials.
The following examples show how to create a `BlobServiceClient` object:
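As a minimal, hedged sketch of this pattern (assuming the `Azure.Storage.Blobs` and `Azure.Identity` packages, and placeholder account, container, and blob names), a single service client can be created once and child clients derived from it as needed:

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

// One service client per storage account (placeholder account name).
BlobServiceClient serviceClient = new BlobServiceClient(
    new Uri("https://<storage-account-name>.blob.core.windows.net"),
    new DefaultAzureCredential());

// Child clients derived from the service client reuse its credential and client options.
BlobContainerClient containerClient = serviceClient.GetBlobContainerClient("sample-container");
BlobClient blobClient = containerClient.GetBlobClient("sample-blob.txt");
```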
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
using Azure.Storage.Blobs.Specialized;
## Authorize access and connect to Blob Storage
-To connect to Blob Storage, create an instance of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class. This object is your starting point. You can use it to operate on the blob service instance and its containers. You can authorize access and create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object by using an Azure Active Directory (Azure AD) authorization token, an account access key, or a shared access signature (SAS).
+To connect an application to Blob Storage, create an instance of the [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) class. This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
+
+To learn more about creating and managing client objects, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+
+You can authorize a `BlobServiceClient` object by using an Azure Active Directory (Azure AD) authorization token, an account access key, or a shared access signature (SAS).
## [Azure AD](#tab/azure-ad)
To authorize with Azure AD, you'll need to use a security principal. The type of
| Where the application runs | Security principal | Guidance | | | | | | Local machine (developing and testing) | Service principal | In this method, dedicated **application service principal** objects are set up using the App registration process for use during local development. The identity of the service principal is then stored as environment variables to be accessed by the app when it's run in local development.<br><br>This method allows you to assign the specific resource permissions needed by the app to the service principal objects used by developers during local development. This approach ensures the application only has access to the specific resources it needs and replicates the permissions the app will have in production.<br><br>The downside of this approach is the need to create separate service principal objects for each developer that works on an application.<br><br>[Authorize access using developer service principals](/dotnet/azure/sdk/authentication-local-development-service-principal?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
-| Local machine (developing and testing) | User identity | In this method, a developer must be signed-in to Azure from either Visual Studio, the Azure Tools extension for VS Code, the Azure CLI, or Azure PowerShell on their local workstation. The application then can access the developer's credentials from the credential store and use those credentials to access Azure resources from the app.<br><br>This method has the advantage of easier setup since a developer only needs to sign in to their Azure account from Visual Studio, VS Code or the Azure CLI. The disadvantage of this approach is that the developer's account likely has more permissions than required by the application, therefore not properly replicating the permissions the app will run with in production.<br><br>[Authorize access using developer credentials](/dotnet/azure/sdk/authentication-local-development-dev-accounts?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
+| Local machine (developing and testing) | User identity | In this method, a developer must be signed-in to Azure from either Visual Studio, the Azure Tools extension for Visual Studio Code, the Azure CLI, or Azure PowerShell on their local workstation. The application then can access the developer's credentials from the credential store and use those credentials to access Azure resources from the app.<br><br>This method has the advantage of easier setup since a developer only needs to sign in to their Azure account from Visual Studio, Visual Studio Code or the Azure CLI. The disadvantage of this approach is that the developer's account likely has more permissions than required by the application, therefore not properly replicating the permissions the app will run with in production.<br><br>[Authorize access using developer credentials](/dotnet/azure/sdk/authentication-local-development-dev-accounts?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
| Hosted in Azure | Managed identity | Apps hosted in Azure should use a **managed identity service principal**. Managed identities are designed to represent the identity of an app hosted in Azure and can only be used with Azure hosted apps.<br><br>For example, a .NET web app hosted in Azure App Service would be assigned a managed identity. The managed identity assigned to the app would then be used to authenticate the app to other Azure services.<br><br>[Authorize access from Azure-hosted apps using a managed identity](/dotnet/azure/sdk/authentication-azure-hosted-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) | | Hosted outside of Azure (for example, on-premises apps) | Service principal | Apps hosted outside of Azure (for example on-premises apps) that need to connect to Azure services should use an **application service principal**. An application service principal represents the identity of the app in Azure and is created through the application registration process.<br><br>For example, consider a .NET web app hosted on-premises that makes use of Azure Blob Storage. You would create an application service principal for the app using the App registration process. The `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` would all be stored as environment variables to be read by the application at runtime and allow the app to authenticate to Azure using the application service principal.<br><br>[Authorize access from on-premises apps using an application service principal](/dotnet/azure/sdk/authentication-on-premises-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) |
-The easiest way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) instance. You can then use that credential to create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object.
+#### Authorize access using DefaultAzureCredential
+
+An easy and secure way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) instance. You can then use that credential to create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object.
+
+The following example creates a `BlobServiceClient` object using `DefaultAzureCredential`:
```csharp public static void GetBlobServiceClient(ref BlobServiceClient blobServiceClient, string accountName)
public static void GetBlobServiceClient(ref BlobServiceClient blobServiceClient,
If you know exactly which credential type you'll use to authenticate users, you can obtain an OAuth token by using other classes in the [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme). These classes derive from the [TokenCredential](/dotnet/api/azure.core.tokencredential) class.
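For example, here's a brief, hedged sketch that uses `ClientSecretCredential`, one of the `TokenCredential`-derived classes, to create a `BlobServiceClient`. The method name and parameter values are placeholders for illustration:

```csharp
using Azure.Identity;
using Azure.Storage.Blobs;

public static BlobServiceClient GetBlobServiceClientClientSecret(
    string accountName, string tenantId, string clientId, string clientSecret)
{
    // ClientSecretCredential authenticates a registered application (service principal)
    // by using its tenant ID, client ID, and client secret.
    var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);

    return new BlobServiceClient(
        new Uri($"https://{accountName}.blob.core.windows.net"),
        credential);
}
```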
+## [SAS token](#tab/sas-token)
+
+Create a [Uri](/dotnet/api/system.uri) by using the blob service endpoint and SAS token. Then, create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using the [Uri](/dotnet/api/system.uri).
+
+```csharp
+public static void GetBlobServiceClientSAS(ref BlobServiceClient blobServiceClient,
+ string accountName, string sasToken)
+{
+ string blobUri = "https://" + accountName + ".blob.core.windows.net";
+
+ blobServiceClient = new BlobServiceClient
+ (new Uri($"{blobUri}?{sasToken}"), null);
+}
+```
+
+To learn more about generating and managing SAS tokens, see the following articles:
+
+- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
+- [Create an account SAS with .NET](../common/storage-account-sas-create-dotnet.md)
+- [Create a service SAS for a container or blob](sas-service-create.md)
+- [Create a user delegation SAS for a container, directory, or blob with .NET](storage-blob-user-delegation-sas-create-dotnet.md)
+ ## [Account key](#tab/account-key) Create a [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) by using the storage account name and account key. Then use that object to initialize a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient).
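A minimal, hedged sketch of this approach follows; the method name is illustrative, and the account name and key are placeholders:

```csharp
using Azure.Storage;
using Azure.Storage.Blobs;

public static BlobServiceClient GetBlobServiceClientSharedKey(
    string accountName, string accountKey)
{
    // Authorize with the storage account name and account key.
    var sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);

    return new BlobServiceClient(
        new Uri($"https://{accountName}.blob.core.windows.net"),
        sharedKeyCredential);
}
```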
BlobServiceClient blobServiceClient = new BlobServiceClient(connectionString);
For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
-## [SAS token](#tab/sas-token)
-
-Create a [Uri](/dotnet/api/system.uri) by using the blob service endpoint and SAS token. Then, create a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) by using the [Uri](/dotnet/api/system.uri).
-
-```csharp
-public static void GetBlobServiceClientSAS(ref BlobServiceClient blobServiceClient,
- string accountName, string sasToken)
-{
- string blobUri = "https://" + accountName + ".blob.core.windows.net";
-
- blobServiceClient = new BlobServiceClient
- (new Uri($"{blobUri}?{sasToken}"), null);
-}
-```
-
-To generate and manage SAS tokens, see any of these articles:
--- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)--- [Create an account SAS with .NET](../common/storage-account-sas-create-dotnet.md)--- [Create a service SAS for a container or blob](sas-service-create.md)--- [Create a user delegation SAS for a container, directory, or blob with .NET](storage-blob-user-delegation-sas-create-dotnet.md)
+> [!IMPORTANT]
+> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
Blob client library information:
## Authorize access and connect to Blob Storage
-To connect to Blob Storage, create an instance of the [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) class. This object is your starting point. You can use it to operate on the blob service instance and its containers. You can create a `BlobServiceClient` object by using an Azure Active Directory (Azure AD) authorization token, an account access key, or a shared access signature (SAS).
+To connect an application to Blob Storage, create an instance of the [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) class. This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
-To learn more about each of these authorization mechanisms, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md).
+To learn more about creating and managing client objects, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+
+You can authorize a `BlobServiceClient` object by using an Azure Active Directory (Azure AD) authorization token, an account access key, or a shared access signature (SAS).
## [Azure AD (Recommended)](#tab/azure-ad)
To authorize with Azure AD, you'll need to use a [security principal](../../acti
| Where the application runs | Security principal | Guidance | | | | | | Local machine (developing and testing) | Service principal | In this method, dedicated **application service principal** objects are set up using the App registration process for use during local development. The identity of the service principal is then stored as environment variables to be accessed by the app when it's run in local development.<br><br>This method allows you to assign the specific resource permissions needed by the app to the service principal objects used by developers during local development. This approach ensures the application only has access to the specific resources it needs and replicates the permissions the app will have in production.<br><br>The downside of this approach is the need to create separate service principal objects for each developer that works on an application.<br><br>To learn how to register the app, set up an Azure AD group, assign roles, and configure environment variables, see [Authorize access using developer service principals](/dotnet/azure/sdk/authentication-local-development-service-principal?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json). To authorize access and connect to Blob Storage using `DefaultAzureCredential`, see the code example in the [next section](#authorize-access-using-defaultazurecredential). |
-| Local machine (developing and testing) | User identity | In this method, a developer must be signed-in to Azure from either the Azure Tools extension for VS Code, the Azure CLI, or Azure PowerShell on their local workstation. The application then can access the developer's credentials from the credential store and use those credentials to access Azure resources from the app.<br><br>This method has the advantage of easier setup since a developer only needs to sign in to their Azure account from VS Code or the Azure CLI. The disadvantage of this approach is that the developer's account likely has more permissions than required by the application, therefore not properly replicating the permissions the app will run with in production.<br><br>To learn how to set up an Azure AD group, assign roles, and sign in to Azure, see [Authorize access using developer credentials](/dotnet/azure/sdk/authentication-local-development-dev-accounts?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json). To authorize access and connect to Blob Storage using `DefaultAzureCredential`, see the code example in the [next section](#authorize-access-using-defaultazurecredential). |
+| Local machine (developing and testing) | User identity | In this method, a developer must be signed-in to Azure from either the Azure Tools extension for Visual Studio Code, the Azure CLI, or Azure PowerShell on their local workstation. The application then can access the developer's credentials from the credential store and use those credentials to access Azure resources from the app.<br><br>This method has the advantage of easier setup since a developer only needs to sign in to their Azure account from Visual Studio Code or the Azure CLI. The disadvantage of this approach is that the developer's account likely has more permissions than required by the application, therefore not properly replicating the permissions the app will run with in production.<br><br>To learn how to set up an Azure AD group, assign roles, and sign in to Azure, see [Authorize access using developer credentials](/dotnet/azure/sdk/authentication-local-development-dev-accounts?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json). To authorize access and connect to Blob Storage using `DefaultAzureCredential`, see the code example in the [next section](#authorize-access-using-defaultazurecredential). |
| Hosted in Azure | Managed identity | Apps hosted in Azure should use a **managed identity service principal**. Managed identities are designed to represent the identity of an app hosted in Azure and can only be used with Azure hosted apps.<br><br>For example, a Java app hosted in Azure App Service would be assigned a managed identity. The managed identity assigned to the app would then be used to authenticate the app to other Azure services.<br><br>To learn how to enable managed identity and assign roles, see [Authorize access from Azure-hosted apps using a managed identity](/dotnet/azure/sdk/authentication-azure-hosted-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json). To authorize access and connect to Blob Storage using `DefaultAzureCredential`, see the code example in the [next section](#authorize-access-using-defaultazurecredential). | | Hosted outside of Azure (for example, on-premises apps) | Service principal | Apps hosted outside of Azure (for example on-premises apps) that need to connect to Azure services should use an **application service principal**. An application service principal represents the identity of the app in Azure and is created through the application registration process.<br><br>For example, consider a Java app hosted on-premises that makes use of Azure Blob Storage. You would create an application service principal for the app using the App registration process. The `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` would all be stored as environment variables to be read by the application at runtime and allow the app to authenticate to Azure using the application service principal.<br><br>To learn how to register the app, assign roles, and configure environment variables, see [Authorize access from on-premises apps using an application service principal](/dotnet/azure/sdk/authentication-on-premises-apps?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json). To authorize access and connect to Blob Storage using `DefaultAzureCredential`, see the code example in the [next section](#authorize-access-using-defaultazurecredential). | #### Authorize access using DefaultAzureCredential
-The easiest way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential) instance. You can then use that credential to create a [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) object.
+An easy and secure way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential) instance. You can then use that credential to create a [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) object.
Make sure you have the correct dependencies in pom.xml and the necessary import directives, as described in [Set up your project](#set-up-your-project).
-The following example uses [BlobServiceClientBuilder](/java/api/com.azure.storage.blob.blobserviceclientbuilder) to build a [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) object using `DefaultAzureCredential`:
+The following example uses [BlobServiceClientBuilder](/java/api/com.azure.storage.blob.blobserviceclientbuilder) to build a `BlobServiceClient` object using `DefaultAzureCredential`:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/App.java" id="Snippet_GetServiceClientAzureAD":::
The following guides show you how to use each of these classes to build your app
| [List blobs](storage-blobs-list-java.md) | List blobs in different ways. | | [Delete and restore](storage-blob-delete-java.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. | | [Find blobs using tags](storage-blob-tags-java.md) | Set and retrieve tags as well as use tags to find blobs. |
-| [Manage properties and metadata (blobs)](storage-blob-properties-metadata-java.md) | Get and set properties and metadata for blobs. |
+| [Manage properties and metadata (blobs)](storage-blob-properties-metadata-java.md) | Get and set properties and metadata for blobs. |
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
Blob client library information:
## Authorize access and connect to Blob Storage
-To connect to Blob Storage, create an instance of the [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) class. This object is your starting point. You can use it to operate on the blob service instance and its containers. You can create a `BlobServiceClient` object by using an Azure Active Directory (Azure AD) authorization token, an account access key, or a shared access signature (SAS).
+To connect an application to Blob Storage, create an instance of the [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) class. This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with.
-To learn more about each of these authorization mechanisms, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md).
+To learn more about creating and managing client objects, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+
+You can authorize a `BlobServiceClient` object by using an Azure Active Directory (Azure AD) authorization token, an account access key, or a shared access signature (SAS).
## [Azure AD](#tab/azure-ad)
To authorize with Azure AD, you'll need to use a [security principal](/azure/act
#### Authorize access using DefaultAzureCredential
-The easiest way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) instance. You can then use that credential to create a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object.
+An easy and secure way to authorize access and connect to Blob Storage is to obtain an OAuth token by creating a [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) instance. You can then use that credential to create a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object.
+
+The following example creates a `BlobServiceClient` object using `DefaultAzureCredential`:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-auth.py" id="Snippet_get_service_client_DAC":::
+## [SAS token](#tab/sas-token)
+
+To use a shared access signature (SAS) token, provide the token as a string and initialize a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object. If your account URL includes the SAS token, omit the credential parameter.
++
+To learn more about generating and managing SAS tokens, see the following article:
+
+- [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json)
## [Account key](#tab/account-key)

To use a storage account shared key, provide the key as a string and initialize a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object.
You can also create a `BlobServiceClient` object using a connection string.
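As a rough illustration (the account name, key, and connection string are placeholders), either approach looks like this:

```python
# Minimal sketch: create a BlobServiceClient with an account key or a connection string.
from azure.storage.blob import BlobServiceClient

# Option 1: account URL plus the shared key as the credential.
blob_service_client = BlobServiceClient(
    account_url="https://<storage-account-name>.blob.core.windows.net",
    credential="<account-access-key>",
)

# Option 2: a connection string that already contains the account name and key.
blob_service_client = BlobServiceClient.from_connection_string("<connection-string>")
```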
For information about how to obtain account keys and best practice guidelines for properly managing and safeguarding your keys, see [Manage storage account access keys](../common/storage-account-keys-manage.md).
-## [SAS token](#tab/sas-token)
-
-To use a shared access signature (SAS) token, provide the token as a string and initialize a [BlobServiceClient](/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient) object. If your account URL includes the SAS token, omit the credential parameter.
-To generate and manage SAS tokens, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md?toc=/azure/storage/blobs/toc.json).
+> [!IMPORTANT]
+> The account access key should be used with caution. If your account access key is lost or accidentally placed in an insecure location, your service may become vulnerable. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` provides enhanced security features and benefits and is the recommended approach for managing authorization to Azure services.
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Azure Storage supports Azure AD authorization for requests to blob, table and qu
Microsoft recommends that you either migrate any Azure Files data to a separate storage account before you disallow access to an account via Shared Key, or do not apply this setting to storage accounts that support Azure Files workloads.
-Disallowing Shared Key access for a storage account does not affect SMB connections to Azure Files.
-
## Identify storage accounts that allow Shared Key access

There are two ways to identify storage accounts that allow Shared Key access:
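In addition to the approaches covered in the full article, the `allowSharedKeyAccess` setting can also be read programmatically. The following is a hedged sketch using the `azure-mgmt-storage` management SDK; the subscription ID is a placeholder, and this example isn't part of the original article:

```python
# Hedged sketch: list storage accounts and report whether Shared Key access is allowed.
# A value of None means the property was never set, which currently defaults to allowing Shared Key access.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder subscription ID

for account in client.storage_accounts.list():
    allowed = account.allow_shared_key_access
    status = "not set (allowed by default)" if allowed is None else allowed
    print(f"{account.name}: AllowSharedKeyAccess = {status}")
```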
synapse-analytics Apache Spark What Is Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-what-is-delta-lake.md
Title: What is Delta Lake
+ Title: What is Delta Lake?
description: Overview of Delta Lake and how it works as part of Azure Synapse Analytics
Last updated 12/06/2022
-# What is Delta Lake
+# What is Delta Lake?
Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.
The current version of Delta Lake included with Azure Synapse has language suppo
| Feature | Description | | | |
-| **ACID Transactions** | Data lakes are typically populated via multiple processes and pipelines, some of which are writing data concurrently with reads. Prior to Delta Lake and the addition of transactions, data engineers had to go through a manual error prone process to ensure data integrity. Delta Lake brings familiar ACID transactions to data lakes. It provides serializability, the strongest level of isolation level. Learn more at [Diving into Delta Lake: Unpacking the Transaction Log](https://databricks.com/blog/2019/08/21/diving-into-delta-lake-unpacking-the-transaction-log.html).|
-| **Scalable Metadata Handling** | In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files at ease. |
+| **ACID Transactions** | Data lakes are typically populated through multiple processes and pipelines, some of which are writing data concurrently with reads. Prior to Delta Lake and the addition of transactions, data engineers had to go through a manual, error-prone process to ensure data integrity. Delta Lake brings familiar ACID transactions to data lakes. It provides serializability, the strongest isolation level. Learn more at [Diving into Delta Lake: Unpacking the Transaction Log](https://databricks.com/blog/2019/08/21/diving-into-delta-lake-unpacking-the-transaction-log.html).|
+| **Scalable Metadata Handling** | In big data, even the metadata itself can be "big data." Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files at ease. |
| **Time Travel (data versioning)** | The ability to "undo" a change or go back to a previous version is one of the key features of transactions. Delta Lake provides snapshots of data, enabling you to revert to earlier versions of data for audits, rollbacks, or to reproduce experiments. Learn more in [Introducing Delta Lake Time Travel for Large Scale Data Lakes](https://databricks.com/blog/2019/02/04/introducing-delta-time-travel-for-large-scale-data-lakes.html). |
| **Open Format** | Apache Parquet is the baseline format for Delta Lake, enabling you to leverage the efficient compression and encoding schemes that are native to the format. |
| **Unified Batch and Streaming Source and Sink** | A table in Delta Lake is both a batch table and a streaming source and sink. Streaming data ingest, batch historic backfill, and interactive queries all just work out of the box. |
The current version of Delta Lake included with Azure Synapse has language suppo
| **Schema Evolution** | Delta Lake enables you to make changes to a table schema that can be applied automatically, without having to write migration DDL. For more information, see [Diving Into Delta Lake: Schema Enforcement & Evolution](https://databricks.com/blog/2019/09/24/diving-into-delta-lake-schema-enforcement-evolution.html). |
| **Audit History** | The Delta Lake transaction log records details about every change made to data, providing a full audit trail of the changes. |
| **Updates and Deletes** | Delta Lake supports Scala, Java, Python, and SQL APIs for a variety of functionality. Support for merge, update, and delete operations helps you to meet compliance requirements. For more information, see [Announcing the Delta Lake 0.6.1 Release](https://github.com/delta-io/delta/releases/tag/v0.6.1), [Announcing the Delta Lake 0.7 Release](https://github.com/delta-io/delta/releases/tag/v0.7.0), and [Simple, Reliable Upserts and Deletes on Delta Lake Tables using Python APIs](https://databricks.com/blog/2019/10/03/simple-reliable-upserts-and-deletes-on-delta-lake-tables-using-python-apis.html), which includes code snippets for merge, update, and delete DML commands. |
-| **100% Compatible with Apache Spark API** | Developers can use Delta Lake with their existing data pipelines with minimal change as it is fully compatible with existing Spark implementations. |
+| **100 percent compatible with Apache Spark API** | Developers can use Delta Lake with their existing data pipelines with minimal change as it is fully compatible with existing Spark implementations. |
For full documentation, see the [Delta Lake Documentation Page](https://docs.delta.io/latest/delta-intro.html)
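To make a few of the features above concrete, here's a minimal PySpark sketch, assuming a Spark pool where Delta Lake is available and using a hypothetical storage path; it isn't taken from the Delta Lake documentation:

```python
# Minimal Delta Lake sketch: transactional writes and time travel.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

path = "abfss://data@<storage-account>.dfs.core.windows.net/delta/events"  # hypothetical path

# Each write is an ACID transaction that produces a new table version.
spark.range(0, 5).write.format("delta").mode("overwrite").save(path)
spark.range(5, 10).write.format("delta").mode("overwrite").save(path)

# Time travel: read the table as it existed at version 0.
df_v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
df_v0.show()
```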
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
Title: Set up Private Link for Azure Virtual Desktop preview - Azure
description: How to set up Private Link for Azure Virtual Desktop (preview). Previously updated : 01/12/2023 Last updated : 02/09/2023
In order to use Private Link in your Azure Virtual Desktop deployment, you'll ne
- An Azure Virtual Desktop deployment with service objects, such as host pools, app groups, and [workspaces](environment-setup.md#workspaces).
- The [required permissions to use Private Link](../private-link/rbac-permissions.md).
+>[!IMPORTANT]
+>There's currently a bug in version 1.2.3918 of the Remote Desktop client for Windows that causes a client regression when you use Private Link. In order to use Private Link in your deployment, you must use [version 1.2.3667](whats-new-client-windows.md#updates-for-version-123667) until we can resolve the bug.
+
### Re-register your resource provider

In the public preview version of Private Link, after you create your resources, you'll need to re-register them to your resource provider before you can start using Private Link. Re-registering allows the service to download and assign the new roles that will let you use this feature.
virtual-desktop Proxy Server Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/proxy-server-support.md
The Azure Virtual Desktop agent automatically tries to locate a proxy server on
To configure your network to use DNS resolution for WPAD, follow the instructions in [Auto detect settings Internet Explorer 11](/internet-explorer/ie11-deploy-guide/auto-detect-settings-for-ie11). Make sure the DNS server global query blocklist allows the WPAD resolution by following the directions in [Set-DnsServerGlobalQueryBlockList](/powershell/module/dnsserver/set-dnsserverglobalqueryblocklist?view=windowsserver2019-ps&preserve-view=true).
-### Manually set a device-wide Internet Explorer proxy
+### Manually set a device-wide proxy for Windows services
-You can set a device-wide proxy or Proxy Auto Configuration (.PAC) file that applies to all interactive, LocalSystem, and NetworkService users with the [Network Proxy CSP](/windows/client-management/mdm/networkproxy-csp).
+You can set a device-wide proxy or Proxy Auto Configuration (.PAC) file that applies to all interactive, Local System, and Network Service users with the [Network Proxy CSP](/windows/client-management/mdm/networkproxy-csp).
-You can also configure the proxy server for the local system account by running the following **bitsadmin** command, as shown in the following example:
+In addition, you'll need to set a proxy for the Windows services *RDAgent* and *Remote Desktop Services*. RDAgent runs with the account *Local System*, and Remote Desktop Services runs with the account *Network Service*. You can set a proxy for these accounts by running the following commands, replacing the `<server>` placeholder with your own address:
```console
-bitsadmin /util /setieproxy LOCALSYSTEM AUTOSCRIPT http://server/proxy.pac
+bitsadmin /util /setieproxy LOCALSYSTEM AUTOSCRIPT http://<server>/proxy.pac
+bitsadmin /util /setieproxy NETWORKSERVICE AUTOSCRIPT http://<server>/proxy.pac
```

## Client-side proxy support
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
All connections begin by establishing a TCP-based [reverse connect transport](ne
1. While the client is probing the provided IP addresses, it continues to establish the initial connection over the reverse connect transport to ensure there's no delay in the user connection.
-1. If the client has a direct connection to the session host, the client establishes a secure TLS connection.
+1. If the client has a direct connection to the session host, the client establishes a secure connection using TLS over reliable UDP.
1. After establishing the RDP Shortpath transport, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection, are moved to the new transport. However, if a firewall or network topology prevents the client from establishing direct UDP connectivity, RDP continues with a reverse connect transport.
All connections begin by establishing a TCP-based [reverse connect transport](ne
1. After the session host and client exchange their candidate lists, both parties attempt to connect with each other using all the gathered candidates. This connection attempt is simultaneous on both sides. Many NAT gateways are configured to allow the incoming traffic to the socket as soon as the outbound data transfer initializes it. This behavior of NAT gateways is the reason the simultaneous connection is essential. If STUN fails because it's blocked, an indirect connection attempt is made using TURN.
-1. After the initial packet exchange, the client and session host may establish one or many data flows. From these data flows, RDP chooses the fastest network path. The client then establishes a secure TLS connection with the session host and initiates RDP Shortpath transport.
+1. After the initial packet exchange, the client and session host may establish one or many data flows. From these data flows, RDP chooses the fastest network path. The client then establishes a secure connection using TLS over reliable UDP with the session host and initiates RDP Shortpath transport.
1. After RDP establishes the RDP Shortpath transport, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection move to the new transport.
TURN is available in the following Azure regions:
| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
|||::||::|::|::|
-| STUN/TURN UDP | VM subnet | Any | 20.202.0.0/16 | 3478-3481 | UDP | Allow |
+| RDP Shortpath Server Endpoint | VM subnet | Any | Any | 1024-65535<br />(*default 49152-65535*) | UDP | Allow |
+| STUN/TURN UDP | VM subnet | Any | 20.202.0.0/16 | 3478 | UDP | Allow |
| STUN/TURN TCP | VM subnet | Any | 20.202.0.0/16 | 443 | TCP | Allow |

#### Client network

| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
|||::||::|::|::|
-| STUN/TURN UDP | Client network | Any | 20.202.0.0/16 | 3478-3481 | UDP | Allow |
+| RDP Shortpath Server Endpoint | Client network | Any | Public IP addresses assigned to NAT Gateway or Azure Firewall (provided by the STUN endpoint) | 1024-65535<br />(*default 49152-65535*) | UDP | Allow |
+| STUN/TURN UDP | Client network | Any | 20.202.0.0/16 | 3478 | UDP | Allow |
| STUN/TURN TCP | Client network | Any | 20.202.0.0/16 | 443 | TCP | Allow |

### Teredo support
The port used for each RDP session depends on whether RDP Shortpath is being use
- **Public networks**: each RDP session uses a dynamically assigned UDP port from an ephemeral port range (49152–65535 by default) that accepts the RDP Shortpath traffic. You can also use a smaller, predictable port range. For more information, see [Limit the port range used by clients for public networks](configure-rdp-shortpath-limit-ports-public-networks.md).
-RDP Shortpath uses a TLS connection between the client and the session host using the session host's certificates. By default, the certificate used for RDP encryption is self-generated by the operating system during the deployment. RDP Shortpath uses a TLS connection between the client and the session host using the session host's certificates. By default, the certificate used for RDP encryption is self-generated by the operating system during the deployment. You can also deploy centrally managed certificates issued by an enterprise certification authority. For more information about certificate configurations, see [Remote Desktop listener certificate configurations](/troubleshoot/windows-server/remote/remote-desktop-listener-certificate-configurations).
+RDP Shortpath uses a secure connection using TLS over reliable UDP between the client and the session host using the session host's certificates. By default, the certificate used for RDP encryption is self-generated by the operating system during the deployment. You can also deploy centrally managed certificates issued by an enterprise certification authority. For more information about certificate configurations, see [Remote Desktop listener certificate configurations](/troubleshoot/windows-server/remote/remote-desktop-listener-certificate-configurations).
> [!NOTE]
-> The security offered by RDP Shortpath is the same as that offered by reverse connect transport.
+> The security offered by RDP Shortpath is the same as that offered by TCP reverse connect transport.
## Example scenarios
virtual-desktop Create Service Principal Role Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-service-principal-role-powershell.md
$creds = New-Object System.Management.Automation.PSCredential($svcPrincipal.AppI
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com" -Credential $creds -ServicePrincipal -AadTenantId $aadContext.Tenant.Id
```
-After you've signed in, make sure everything works by testing a few Azure Virtual Desktop PowerShell cmdlets with the service principal.
+If you can sign in successfully, your service principal is configured correctly.
## Next steps
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/convert-disk-storage.md
Previously updated : 01/18/2023 Last updated : 02/09/2023
-# Convert Azure managed disks storage from Standard to Premium or Premium to Standard
+# Change the disk type of an Azure managed disk - CLI
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/convert-disk-storage.md
Previously updated : 01/18/2023 Last updated : 02/09/2023
-# Update the storage type of a managed disk
+# Change the disk type of an Azure managed disk - PowerShell
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
Title: How Accelerated Networking works in Linux and FreeBSD VMs description: How Accelerated Networking Works in Linux and FreeBSD VMs - ms.devlang: na
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
Title: Accelerated Networking overview description: Accelerated Networking to improves networking performance of Azure VMs. - ms.devlang: na
virtual-network Application Security Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/application-security-groups.md
Title: Azure application security groups overview
description: Learn about the use of application security groups. Last updated 02/27/2020
virtual-network Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/cli-samples.md
documentationcenter: virtual-network
-tags:
virtual-network Concepts And Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/concepts-and-best-practices.md
Title: Azure Virtual Network - Concepts and best practices description: Learn about Azure Virtual Network concepts and best practices. Last updated 12/03/2020
virtual-network Container Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/container-networking-overview.md
Title: Container networking with Azure Virtual Network | Microsoft Docs
+ Title: Container networking with Azure Virtual Network
description: Learn about the Azure Virtual Network container network interface (CNI) plug-in and how to enable containers to use an Azure Virtual Network. tags: azure-resource-manager- Last updated 9/18/2018 -- # Enable containers to use Azure Virtual Network capabilities
virtual-network Create Peering Different Deployment Models Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models-subscriptions.md
Title: Create an Azure virtual network peering - different deployment models -different subscriptions
+ Title: Create an Azure virtual network peering - different deployment models - different subscriptions
description: Learn how to create a virtual network peering between virtual networks created through different Azure deployment models that exist in different Azure subscriptions. Last updated 06/25/2020 - + # Create a virtual network peering - different deployment models and subscriptions In this tutorial, you learn to create a virtual network peering between virtual networks created through different deployment models. The virtual networks exist in different subscriptions. Peering two virtual networks enables resources in different virtual networks to communicate with each other with the same bandwidth and latency as though the resources were in the same virtual network. Learn more about [Virtual network peering](virtual-network-peering-overview.md).
virtual-network Create Peering Different Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models.md
Title: Create an Azure virtual network peering - different deployment models - same subscription | Microsoft Docs
+ Title: Create an Azure virtual network peering - different deployment models - same subscription
description: Learn how to create a virtual network peering between virtual networks created through different Azure deployment models that exist in the same Azure subscription. tags: azure-resource-manager- Last updated 11/15/2018 + # Create a virtual network peering - different deployment models, same subscription In this tutorial, you learn to create a virtual network peering between virtual networks created through different deployment models. Both virtual networks exist in the same subscription. Peering two virtual networks enables resources in different virtual networks to communicate with each other with the same bandwidth and latency as though the resources were in the same virtual network. Learn more about [Virtual network peering](virtual-network-peering-overview.md).
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Title: Create an Azure VM with Accelerated Networking using Azure CLI description: Learn how to create a Linux virtual machine with Accelerated Networking enabled. tags: azure-resource-manager- Last updated 03/24/2022
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-powershell.md
Title: Create Windows VM with accelerated networking - Azure PowerShell description: Create a Windows virtual machine (VM) with Accelerated Networking for improved network performance - vm-windows
virtual-network Deploy Container Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking.md
Title: Deploy Azure virtual network container networking | Microsoft Docs
+ Title: Deploy Azure virtual network container networking
description: Learn how to deploy the Azure Virtual Network container network interface (CNI) plug-in for Kubernetes clusters. tags: azure-resource-manager- Last updated 9/18/2018 -- # Deploy the Azure Virtual Network container network interface plug-in
virtual-network Diagnose Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-routing-problem.md
Title: Diagnose an Azure virtual machine routing problem | Microsoft Docs
+ Title: Diagnose an Azure virtual machine routing problem
description: Learn how to diagnose a virtual machine routing problem by viewing the effective routes for a virtual machine. tags: azure-resource-manager- Last updated 05/30/2018
virtual-network Diagnose Network Traffic Filter Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-traffic-filter-problem.md
Title: Diagnose a virtual machine network traffic filter problem | Microsoft Docs
+ Title: Diagnose a virtual machine network traffic filter problem
description: Learn how to diagnose a virtual machine network traffic filter problem by viewing the effective security rules for a virtual machine. tags: azure-resource-manager- ms.assetid: a54feccf-0123-4e49-a743-eb8d0bdd1ebc Last updated 05/29/2018
virtual-network Configure Public Ip Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-vpn-gateway.md
In this section, you'll create a VPN gateway. You'll select the IP address you c
7. Select **Create**. > [!NOTE]
-> This is a simple deployment of a VPN Gateway. For advanced configuration and setup, see [Tutorial: Create and manage a VPN gateway using Azure portal](../../vpn-gateway/tutorial-create-gateway-portal.md).
+> This is a simple deployment of a VPN gateway. For advanced configuration and setup, see [Tutorial: Create and manage a VPN gateway using Azure portal](../../vpn-gateway/tutorial-create-gateway-portal.md).
>
-> For more information on Azure VPN Gateway, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
+> For more information on Azure VPN Gateway, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
## Change or remove public IP address
-VPN gateway doesn't support changing the public IP address after creation.
+VPN Gateway doesn't support changing the primary public IP address after creation.
## Caveats
VPN gateway doesn't support changing the public IP address after creation.
## Next steps
-In this article, you learned how to create a VPN Gateway using an existing public IP address.
+In this article, you learned how to create a VPN gateway using an existing public IP address.
- To learn more about public IP addresses in Azure, see [Public IP addresses](./public-ip-addresses.md).-- To learn more about VPN gateways, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
+- To learn more about VPN gateways, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md)
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
Title: Azure Kubernetes network policies | Microsoft Docs
+ Title: Azure Kubernetes network policies
description: Learn about Kubernetes network policies to secure your Kubernetes cluster. tags: azure-resource-manager- -+ Last updated 9/25/2018 -- # Azure Kubernetes Network Policies
virtual-network Manage Route Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-route-table.md
Title: Create, change, or delete an Azure route table
description: Learn where to find information about virtual network traffic routing, and how to create, change, or delete a route table. Last updated 12/13/2022
virtual-network Move Across Regions Nsg Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-nsg-portal.md
Title: Move Azure network security group (NSG) to another Azure region using the Azure portal
+ Title: Move Azure network security group (NSG) to another Azure region - Azure portal
description: Use Azure Resource Manager template to move Azure network security group from one Azure region to another using the Azure portal.
virtual-network Move Across Regions Nsg Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-nsg-powershell.md
Title: Move Azure network security group (NSG) to another Azure region using Azure PowerShell
+ Title: Move Azure network security group (NSG) to another Azure region - Azure PowerShell
description: Use Azure Resource Manager template to move Azure network security group from one Azure region to another using Azure PowerShell.
virtual-network Move Across Regions Publicip Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-publicip-portal.md
Title: Move Azure Public IP configuration to another Azure region Azure portal
+ Title: Move Azure Public IP configuration to another Azure region - Azure portal
description: Use a template to move Azure Public IP configuration from one Azure region to another using the Azure portal.
virtual-network Move Across Regions Publicip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-publicip-powershell.md
Title: Move Azure Public IP configuration to another Azure region using Azure PowerShell
+ Title: Move Azure Public IP configuration to another Azure region - Azure PowerShell
description: Use Azure Resource Manager template to move Azure Public IP configuration from one Azure region to another using Azure PowerShell.
virtual-network Move Across Regions Vnet Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-vnet-portal.md
Title: Move an Azure virtual network to another Azure region using the Azure portal.
+ Title: Move an Azure virtual network to another Azure region - Azure portal.
description: Move an Azure virtual network from one Azure region to another by using a Resource Manager template and the Azure portal.
virtual-network Move Across Regions Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-vnet-powershell.md
Title: Move an Azure virtual network to another Azure region by using Azure PowerShell
+ Title: Move an Azure virtual network to another Azure region - Azure PowerShell
description: Move an Azure virtual network from one Azure region to another by using a Resource Manager template and Azure PowerShell.
virtual-network Nat Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-availability-zones.md
- Title: NAT gateway and availability zones description: Key concepts and design guidance on using NAT gateway with availability zones.
virtual-network Quickstart Create Nat Gateway Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-bicep.md
Title: 'Create a NAT gateway - Bicep'
description: This quickstart shows how to create a NAT gateway using Bicep. # Customer intent: I want to create a NAT gateway using Bicep so that I can provide outbound connectivity for my virtual machines. Last updated 04/08/2022
virtual-network Quickstart Create Nat Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md
Previously updated : 11/11/2022 Last updated : 02/09/2023
For information about public IP prefixes and a NAT gateway, see [Manage NAT gate
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-3. Select **+ Create**.
+1. Select **+ Create**.
-4. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
+1. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
| **Setting** | **Value** | ||--|
For information about public IP prefixes and a NAT gateway, see [Manage NAT gate
| NAT gateway name | Enter **myNATgateway** | | Region | Select **West Europe** | | Availability Zone | Select **No Zone**. |
- | Idle timeout (minutes) | Enter **10**. |
+ | TCP idle timeout (minutes) | Enter **10**. |
For information about availability zones and NAT gateway, see [NAT gateway and availability zones](./nat-availability-zones.md).
-5. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
+1. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
-6. In the **Outbound IP** tab, enter or select the following information:
+1. In the **Outbound IP** tab, enter or select the following information:
| **Setting** | **Value** | | -- | | | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myPublicIP**. </br> Select **OK**. |
-7. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-8. Select **Create**.
+1. Select **Create**.
## Virtual network
Before you deploy a virtual machine and can use your NAT gateway, you need to cr
1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
-2. Select **Create**.
+1. Select **+ Create**.
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+1. In **Create virtual network**, enter or select this information in the **Basics** tab:
| **Setting** | **Value** | ||--|
Before you deploy a virtual machine and can use your NAT gateway, you need to cr
| Name | Enter **myVNet** | | Region | Select **(Europe) West Europe** |
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+1. Select the **Security** tab or select the **Next: Security** button at the bottom of the page.
-5. Accept the default IPv4 address space of **10.1.0.0/16**.
+1. Under **Azure Bastion**, select **Enable Azure Bastion**. Enter this information:
-6. In the subnet section in **Subnet name**, select the **default** subnet.
+ | Setting | Value |
+ |--|-|
+ | Azure Bastion name | Enter **myBastionHost** |
+ | Azure Bastion public IP address | Select **New(myVNet-publicipAddress1)** |
+
+1. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+1. Accept the default IPv4 address space of **10.0.0.0/16**.
-7. In **Edit subnet**, enter this information:
+1. In the subnet section in **Subnet name**, select the **default** subnet, then select **Save**.
+
+1. In **Edit subnet**, enter this information:
| Setting | Value | |--|-|
- | Subnet name | Enter **mySubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
- | **NAT GATEWAY** |
+ | Name| Enter **mySubnet** |
+ | Starting address | Enter **10.0.0.0** |
+ | Subnet size | Select **/24** |
+ | **Security** |
| NAT gateway | Select **myNATgateway**. |
-8. Select **Save**.
-
-9. Select the **Security** tab.
-
-10. Under **BastionHost**, select **Enable**. Enter this information:
+1. Select **Add a subnet** and enter the following information, then select **Add**.
| Setting | Value | |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+ | Subnet template | Select **Azure Bastion** |
+ | Starting address | Enter **10.0.1.0** |
+ | Subnet size | Select **/26** |
-11. Select the **Review + create** tab or select the **Review + create** button.
+1. Select the **Review + create** tab or select the **Review + create** button.
-12. Select **Create**.
+1. Select **Create**.
It can take a few minutes for the deployment of the virtual network to complete. Proceed to the next steps when the deployment completes.
In this section, you'll create a virtual machine to test the NAT gateway and ver
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. Select **+ Create** > **Azure virtual machine**.
+1. Select **+ Create** > **Azure virtual machine**.
-2. In the **Create a virtual machine** page in the **Basics** tab, enter, or select the following information:
+1. In the **Create a virtual machine** page in the **Basics** tab, enter or select the following information:
| **Setting** | **Value** | | -- | |
In this section, you'll create a virtual machine to test the NAT gateway and ver
| **Inbound port rules** | | | Public inbound ports | Select **None**. |
-3. Select the **Disks** tab, or select the **Next: Disks** button at the bottom of the page.
+1. Select the **Disks** tab, or select the **Next: Disks** button at the bottom of the page.
-4. Leave the default in the **Disks** tab.
+1. Leave the default in the **Disks** tab.
-5. Select the **Networking** tab, or select the **Next: Networking** button at the bottom of the page.
+1. Select the **Networking** tab, or select the **Next: Networking** button at the bottom of the page.
-6. In the **Networking** tab, enter or select the following information:
+1. In the **Networking** tab, enter or select the following information:
| **Setting** | **Value** | | -- | |
In this section, you'll create a virtual machine to test the NAT gateway and ver
| NIC network security group | Select **Basic**. | | Public inbound ports | Select **None**. |
-7. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+1. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
-8. Select **Create**.
+1. Select **Create**.
## Test NAT gateway
In this section, you'll test the NAT gateway. You'll first discover the public I
1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
-2. Select **myPublicIP**.
+1. Select **myPublicIP**.
-3. Make note of the public IP address:
+1. Make note of the public IP address:
:::image type="content" source="./media/quickstart-create-nat-gateway-portal/find-public-ip.png" alt-text="Discover public IP address of NAT gateway" border="true":::
-4. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-5. Select **myVM**.
+1. Select **myVM**.
-4. On the **Overview** page, select **Connect**, then **Bastion**.
+1. On the **Overview** page, select **Connect**, then **Bastion**.
-6. Enter the username and password entered during VM creation. Select **Connect**.
+1. Enter the username and password entered during VM creation. Select **Connect**.
-7. Open **Microsoft Edge** on **myTestVM**.
+1. Open **Microsoft Edge** on **myVM**.
-8. Enter **https://whatsmyip.com** in the address bar.
+1. Enter **https://whatsmyip.com** in the address bar.
-9. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+1. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
:::image type="content" source="./media/quickstart-create-nat-gateway-portal/my-ip.png" alt-text="Internet Explorer showing external outbound IP" border="true":::
the virtual network, virtual machine, and NAT gateway with the following steps:
1. From the left-hand menu, select **Resource groups**.
-2. Select the **myResourceGroupNAT** resource group.
+1. Select the **myResourceGroupNAT** resource group.
-3. Select **Delete resource group**.
+1. Select **Delete resource group**.
-4. Enter **myResourceGroupNAT** and select **Delete**.
+1. Enter **myResourceGroupNAT** and select **Delete**.
## Next steps
virtual-network Quickstart Create Nat Gateway Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-template.md
Title: 'Create a NAT gateway - Resource Manager Template'
description: This quickstart shows how to create a NAT gateway by using the Azure Resource Manager template (ARM template). # Customer intent: I want to create a NAT gateway by using an Azure Resource Manager template so that I can provide outbound connectivity for my virtual machines. - Last updated 10/27/2020
virtual-network Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-overview.md
Last updated 08/17/2021 - # Virtual networks and virtual machines in Azure
virtual-network Network Security Group How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-group-how-it-works.md
Title: Network security group - how it works
description: Learn how network security groups help you filter network traffic between Azure resources. Last updated 08/24/2020
virtual-network Network Security Groups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-security-groups-overview.md
Title: Azure network security groups overview
description: Learn about network security groups. Network security groups help you filter network traffic between Azure resources. Last updated 11/10/2022
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
+ # Azure Policy built-in definitions for Azure Virtual Network This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
virtual-network Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/powershell-samples.md
documentationcenter: virtual-network
-tags:
Last updated 07/15/2019
virtual-network Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-bicep.md
Title: 'Quickstart: Create a virtual network using Bicep'
+ Title: 'Quickstart: Create a virtual network - Bicep'
description: Learn how to use Bicep to create an Azure virtual network.
virtual-network Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-cli.md
Title: Create a virtual network - quickstart - Azure CLI
+ Title: 'Quickstart: Create a virtual network - Azure CLI'
description: In this quickstart, learn to create a virtual network using the Azure CLI. A virtual network lets Azure resources communicate with each other and with the internet.
virtual-network Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-powershell.md
Title: Create a virtual network - quickstart - Azure PowerShell
+ Title: 'Quickstart: Create a virtual network - Azure PowerShell'
description: In this quickstart, you create a virtual network using the Azure portal. A virtual network lets Azure resources communicate with each other and with the internet.
virtual-network Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-template.md
Title: 'Quickstart: Create a virtual network using a Resource Manager template'
+ Title: 'Quickstart: Create a virtual network - Resource Manager template'
description: Learn how to use a Resource Manager template to create an Azure virtual network.
virtual-network Virtual Network Cli Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-filter-network-traffic.md
ms.devlang: azurecli Last updated 02/03/2022
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack-standard-load-balancer.md
Title: Azure CLI script sample - Configure IPv6 frontend - Standard Load Balance
description: Learn how to configure IPv6 endpoints in a virtual network script sample using Standard Load Balancer.
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack.md
Title: Azure CLI script sample - Configure IPv6 frontend
description: Use an Azure CLI script sample to configure IPv6 endpoints and deploy a dual stack (IPv4 + IPv6) application in Azure. -+ Last updated 02/03/2022
virtual-network Virtual Network Cli Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-route-traffic-through-nva.md
documentationcenter: virtual-network
-tags:
ms.devlang: azurecli Last updated 02/03/2022
virtual-network Virtual Network Powershell Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-filter-network-traffic.md
documentationcenter: virtual-network
-tags:
- ms.devlang: powershell Last updated 03/20/2018 - # Filter inbound and outbound VM network traffic script sample
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack-standard-load-balancer.md
Title: Azure PowerShell script sample - Configure IPv6 frontend with Standard Load Balancer(preview)
+ Title: Azure PowerShell script sample - Configure IPv6 frontend with Standard Load Balancer (preview)
description: Learn about configuring an IPv6 frontend in a virtual network script sample with Standard Load Balancer. -+ Last updated 07/15/2019
-# Configure IPv6 frontend in virtual network script sample with Standard Load Balancer(preview)
+# Configure IPv6 frontend in virtual network script sample with Standard Load Balancer (preview)
This article shows you how to deploy a dual stack (IPv4 + IPv6) application in Azure that includes a dual stack virtual network with a dual stack subnet, a load balancer with dual (IPv4 + IPv6) front-end configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs.
virtual-network Virtual Network Powershell Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-ipv6-dual-stack.md