Updates from: 10/31/2022 02:06:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
Previously updated : 04/15/2022 Last updated : 10/27/2022 zone_pivot_groups: b2c-policy-type
The following table lists the administrative configuration limits in the Azure A
|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 | |Number of policies per Azure AD B2C tenant (user flows + custom policies) |200 | |Maximum policy file size |1024 KB |
+|Number of API connectors per tenant |19 |
<sup>1</sup> See also [Azure AD service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md).
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-data-residency.md
Previously updated : 08/01/2022 Last updated : 10/29/2022
For Microsoft Azure Government, Microsoft Azure operated by 21Vianet, Azure AD B
If you use MFA Server, the following personal data is stored. > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers who want to require multifactor authentication from their users should use cloud-based Azure AD multifactor authentication. Existing customers who activated Multifactor Authentication Server before July 1, 2019, can download the latest version and updates, and generate activation credentials as usual.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
| Event type | Data store type | |--|--|
active-directory Howto Authentication Passwordless Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-deployment.md
Previously updated : 06/23/2022 Last updated : 10/29/2022
This method can also be used for easy recovery when the user has lost or forgott
**MFA server** - End users enabled for multi-factor authentication through an organization's on-premises MFA server can create and use a single passwordless phone sign-in credential. If the user attempts to upgrade multiple installations (5 or more) of the Authenticator app with the credential, this change may result in an error. > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. We recommend moving from MFA Server to Azure AD MFA.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
**Device registration** - To use the Authenticator app for passwordless authentication, the device must be registered in the Azure AD tenant and can't be a shared device. A device can only be registered in a single tenant. This limit means that only one work or school account is supported for phone sign-in using the Authenticator app.
active-directory Howto Mfa Server Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-server-settings.md
Previously updated : 06/05/2020 Last updated : 10/29/2022
This article helps you to manage Azure MFA Server settings in the Azure portal. > [!IMPORTANT]
-> As of July 1, 2019, Microsoft will no longer offer MFA Server for new deployments. New customers who would like to require multi-factor authentication from their users should use cloud-based Azure AD Multi-Factor Authentication. Existing customers who have activated MFA Server prior to July 1 will be able to download the latest version, future updates and generate activation credentials as usual.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
The following MFA Server settings are available:
active-directory Howto Mfaserver Adfs 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-adfs-2.md
Title: Use Azure MFA Server with AD FS 2.0 - Azure Active Directory
-description: This is the Azure Multi-Factor authentication page that describes how to get started with Azure MFA and AD FS 2.0.
+description: Describes how to get started with Azure MFA and AD FS 2.0.
Previously updated : 08/27/2021 Last updated : 10/29/2022
This article is for organizations that are federated with Azure Active Directory
This documentation covers using the Azure Multi-Factor Authentication Server with AD FS 2.0. For information about AD FS, see [Securing cloud and on-premises resources using Azure Multi-Factor Authentication Server with Windows Server](howto-mfaserver-adfs-windows-server.md). > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
>
-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
>
-> If you use cloud-based MFA, see [Securing cloud resources with Azure AD Multi-Factor Authentication and AD FS](howto-mfa-adfs.md).
+> If you use cloud-based MFA, see [Securing cloud resources with Azure Multi-Factor Authentication and AD FS](howto-mfa-adfs.md).
> > Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
To secure AD FS 2.0 with a proxy, install the Azure Multi-Factor Authentication
![MFA Server IIS Authentication window](./media/howto-mfaserver-adfs-2/setup1.png) 4. To detect username, password, and domain variables automatically, enter the login URL (like `https://sso.contoso.com/adfs/ls`) within the Auto-Configure Form-Based Website dialog box and click **OK**.
-5. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users have not yet been imported into the Server and/or will be exempt from two-step verification, leave the box unchecked.
-6. If the page variables cannot be detected automatically, click the **Specify Manually…** button in the Auto-Configure Form-Based Website dialog box.
+5. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users haven't yet been imported into the Server and/or will be exempt from two-step verification, leave the box unchecked.
+6. If the page variables can't be detected automatically, click the **Specify Manually…** button in the Auto-Configure Form-Based Website dialog box.
7. In the Add Form-Based Website dialog box, enter the URL to the AD FS login page in the Submit URL field (like `https://sso.contoso.com/adfs/ls`) and enter an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages. 8. Set the Request format to **POST or GET**. 9. Enter the Username variable (ctl00$ContentPlaceHolder1$UsernameTextBox) and Password variable (ctl00$ContentPlaceHolder1$PasswordTextBox). If your form-based login page displays a domain textbox, enter the Domain variable as well. To find the names of the input boxes on the login page, go to the login page in a web browser, right-click on the page and select **View Source**.
-10. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users have not yet been imported into the Server and/or will be exempt from two-step verification, leave the box unchecked.
+10. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users haven't yet been imported into the Server and/or will be exempt from two-step verification, leave the box unchecked.
![Add form-based website to MFA Server](./media/howto-mfaserver-adfs-2/manual.png)
To secure AD FS 2.0 with a proxy, install the Azure Multi-Factor Authentication
- Cache successful authentications to the website using cookies - Select how to authenticate the primary credentials
-12. Since the AD FS proxy server is not likely to be joined to the domain, you can use LDAP to connect to your domain controller for user import and pre-authentication. In the Advanced Form-Based Website dialog box, click the **Primary Authentication** tab and select **LDAP Bind** for the Pre-authentication Authentication type.
+12. Since the AD FS proxy server isn't likely to be joined to the domain, you can use LDAP to connect to your domain controller for user import and pre-authentication. In the Advanced Form-Based Website dialog box, click the **Primary Authentication** tab and select **LDAP Bind** for the Pre-authentication Authentication type.
13. When complete, click **OK** to return to the Add Form-Based Website dialog box. 14. Click **OK** to close the dialog box. 15. Once the URL and page variables have been detected or entered, the website data displays in the Form-Based panel.
You enabled IIS authentication, but to perform the pre-authentication to your Ac
1. Next, click the **Company Settings** icon and select the **Username Resolution** tab. 2. Select the **Use LDAP unique identifier attribute for matching usernames** radio button.
-3. If users enter their username in "domain\username" format, the Server needs to be able to strip the domain off the username when it creates the LDAP query. That can be done through a registry setting.
+3. If users enter their username in "domain\username" format, the Server needs to be able to strip the domain off the username when it creates the LDAP query, which can be done through a registry setting.
4. Open the registry editor and go to HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/Positive Networks/PhoneFactor on a 64-bit server. If on a 32-bit server, take the "Wow6432Node" out of the path. Create a DWORD registry key called "UsernameCxz_stripPrefixDomain" and set the value to 1. Azure Multi-Factor Authentication is now securing the AD FS proxy.
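If you'd rather script this step than use the registry editor, a minimal sketch with Python's built-in `winreg` module might look like the following; it assumes an elevated session on a 64-bit MFA Server and only sets the value described above.

```python
# Sketch: create the DWORD value described above so the MFA Server strips the
# "domain\" prefix before building the LDAP query. Run elevated on the server.
import winreg

key_path = r"SOFTWARE\Wow6432Node\Positive Networks\PhoneFactor"  # drop Wow6432Node on 32-bit
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "UsernameCxz_stripPrefixDomain", 0, winreg.REG_DWORD, 1)
```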
-Ensure that users have been imported from Active Directory into the Server. See the [Trusted IPs section](#trusted-ips) if you would like to allow internal IP addresses so that two-step verification is not required when signing in to the website from those locations.
+Make sure users are imported from Active Directory into the Server. To allow users to skip two-step verification from internal IP addresses, see the [Trusted IPs](#trusted-ips) section.
![Registry editor to configure company settings](./media/howto-mfaserver-adfs-2/reg.png) ## AD FS 2.0 Direct without a proxy
-You can secure AD FS when the AD FS proxy is not used. Install the Azure Multi-Factor Authentication Server on the AD FS server and configure the Server per the following steps:
+You can secure AD FS when the AD FS proxy isn't used. Install the Azure Multi-Factor Authentication Server on the AD FS server and configure the Server per the following steps:
1. Within the Azure Multi-Factor Authentication Server, click the **IIS Authentication** icon in the left menu. 2. Click the **HTTP** tab. 3. Click **Add**. 4. In the Add Base URL dialogue box, enter the URL for the AD FS website where HTTP authentication is performed (like `https://sso.domain.com/adfs/ls/auth/integrated`) into the Base URL field. Then, enter an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages. 5. If desired, adjust the Idle timeout and Maximum session times.
-6. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users have not yet been imported into the Server and/or will be exempt from two-step verification, leave the box unchecked.
+6. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users haven't yet been imported into the Server and/or will be exempt from two-step verification, leave the box unchecked.
7. Check the cookie cache box if desired. ![AD FS 2.0 Direct without a proxy](./media/howto-mfaserver-adfs-2/noproxy.png)
You can secure AD FS when the AD FS proxy is not used. Install the Azure Multi-F
Azure Multi-Factor Authentication is now securing AD FS.
-Ensure that users have been imported from Active Directory into the Server. See the Trusted IPs section if you would like to allow internal IP addresses so that two-step verification is not required when signing in to the website from those locations.
+Ensure that users have been imported from Active Directory into the Server. See the Trusted IPs section if you would like to allow internal IP addresses so that two-step verification isn't required when signing in to the website from those locations.
## Trusted IPs
active-directory Howto Mfaserver Adfs Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-adfs-windows-server.md
Previously updated : 08/25/2021 Last updated : 10/29/2022
If you use Active Directory Federation Services (AD FS) and want to secure cloud
In this article, we discuss using Azure Multi-Factor Authentication Server with AD FS beginning with Windows Server 2016. For more information, read about how to [secure cloud and on-premises resources by using Azure Multi-Factor Authentication Server with AD FS 2.0](howto-mfaserver-adfs-2.md). > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
active-directory Howto Mfaserver Deploy Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-ha.md
Previously updated : 11/21/2019 Last updated : 10/29/2022
# Configure Azure Multi-Factor Authentication Server for high availability
-To achieve high-availability with your Azure Server MFA deployment, you need to deploy multiple MFA servers. This section provides information on a load-balanced design to achieve your high availability targets in you Azure MFS Server deployment.
+To achieve high availability with your Azure MFA Server deployment, you need to deploy multiple MFA Servers. This section provides information on a load-balanced design to achieve your high availability targets in your Azure MFA Server deployment.
> [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md). >
-> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
## MFA Server overview
Both MFA primary and subordinate MFA Servers communicate with the MFA Service wh
After successful authentication with AD, the MFA Server will communicate with the MFA Service. The MFA Server waits for notification from the MFA Service to allow or deny the user access to the application.
-If the MFA primary server goes offline, authentications can still be processed, but operations that require changes to the MFA database cannot be processed. (Examples include: the addition of users, self-service PIN changes, changing user information, or access to the user portal)
+If the MFA primary server goes offline, authentications can still be processed, but operations that require changes to the MFA database can't be processed. (Examples include: the addition of users, self-service PIN changes, changing user information, or access to the user portal)
## Deployment Consider the following important points for load balancing Azure MFA Server and its related components. * **Using RADIUS standard to achieve high availability**. If you are using Azure MFA Servers as RADIUS servers, you can potentially configure one MFA Server as a primary RADIUS authentication target and other Azure MFA Servers as secondary authentication targets. However, this method to achieve high availability may not be practical because you must wait for a time-out period to occur when authentication fails on the primary authentication target before you can be authenticated against the secondary authentication target. It is more efficient to load balance the RADIUS traffic between the RADIUS client and the RADIUS Servers (in this case, the Azure MFA Servers acting as RADIUS servers) so that you can configure the RADIUS clients with a single URL that they can point to.
-* **Need to manually promote MFA subordinates**. If the primary Azure MFA server goes offline, the secondary Azure MFA Servers continue to process MFA requests. However, until a primary MFA server is available, admins can not add users or modify MFA settings, and users can not make changes using the user portal. Promoting an MFA subordinate to the primary role is always a manual process.
+* **Need to manually promote MFA subordinates**. If the primary Azure MFA server goes offline, the secondary Azure MFA Servers continue to process MFA requests. However, until a primary MFA server is available, admins can't add users or modify MFA settings, and users can't make changes using the user portal. Promoting an MFA subordinate to the primary role is always a manual process.
* **Separability of components**. The Azure MFA Server comprises several components that can be installed on the same Windows Server instance or on different instances. These components include the User Portal, Mobile App Web Service, and the ADFS adapter (agent). This separability makes it possible to use the Web Application Proxy to publish the User Portal and Mobile App Web Server from the perimeter network. Such a configuration adds to the overall security of your design, as shown in the following diagram. The MFA User Portal and Mobile App Web Server may also be deployed in HA load-balanced configurations. ![MFA Server with a Perimeter Network](./media/howto-mfaserver-deploy-ha/mfasecurity.png)
Note the following items for the correspondingly numbered area of the preceding
![Azure MFA Server - App server HA](./media/howto-mfaserver-deploy-ha/mfaapp.png) > [!NOTE]
- > Because RPC uses dynamic ports, it is not recommended to open firewalls up to the range of dynamic ports that RPC can potentially use. If you have a firewall **between** your MFA application servers, you should configure the MFA Server to communicate on a static port for the replication traffic between subordinate and primary servers and open that port on your firewall. You can force the static port by creating a DWORD registry value at ```HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Positive Networks\PhoneFactor``` called ```Pfsvc_ncan_ip_tcp_port``` and setting the value to an available static port. Connections are always initiated by the subordinate MFA Servers to the primary, the static port is only required on the primary, but since you can promote a subordinate to be the primary at any time, you should set the static port on all MFA Servers.
+ > Because RPC uses dynamic ports, it isn't recommended to open firewalls up to the range of dynamic ports that RPC can potentially use. If you have a firewall **between** your MFA application servers, you should configure the MFA Server to communicate on a static port for the replication traffic between subordinate and primary servers and open that port on your firewall. You can force the static port by creating a DWORD registry value at ```HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Positive Networks\PhoneFactor``` called ```Pfsvc_ncan_ip_tcp_port``` and setting the value to an available static port. Connections are always initiated by the subordinate MFA Servers to the primary, the static port is only required on the primary, but since you can promote a subordinate to be the primary at any time, you should set the static port on all MFA Servers.
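The static replication port in the note above can be set the same way as any other MFA Server registry value; this is a sketch with Python's built-in `winreg` module, and the port number is a placeholder, not a value from the article.

```python
# Sketch: pin MFA Server replication (RPC) traffic to a static port by creating
# the DWORD value named in the note. Run elevated on each MFA Server.
import winreg

STATIC_PORT = 4443  # placeholder: any free port you open on the firewall
key_path = r"SOFTWARE\Wow6432Node\Positive Networks\PhoneFactor"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Pfsvc_ncan_ip_tcp_port", 0, winreg.REG_DWORD, STATIC_PORT)
```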
2. The two User Portal/MFA Mobile App servers (MFA-UP-MAS1 and MFA-UP-MAS2) are load balanced in a **stateful** configuration (mfa.contoso.com). Recall that sticky sessions are a requirement for load balancing the MFA User Portal and Mobile App Service. ![Azure MFA Server - User Portal and Mobile App Service HA](./media/howto-mfaserver-deploy-ha/mfaportal.png)
active-directory Howto Mfaserver Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md
![Getting started with MFA Server on-premises](./media/howto-mfaserver-deploy/server2.png)</center>
-This page covers a new installation of the server and setting it up with on-premises Active Directory. If you already have the MFA server installed and are looking to upgrade, see [Upgrade to the latest Azure AD Multi-Factor Authentication Server](howto-mfaserver-deploy-upgrade.md). If you're looking for information on installing just the web service, see [Deploying the Azure AD Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).
+This page covers a new installation of the server and setting it up with on-premises Active Directory. If you already have the MFA server installed and are looking to upgrade, see [Upgrade to the latest Azure Multi-Factor Authentication Server](howto-mfaserver-deploy-upgrade.md). If you're looking for information on installing just the web service, see [Deploying the Azure Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).
> [!IMPORTANT]
-> In September 2022, Microsoft announced deprecation of Azure AD Multi-Factor Authentication Server. Beginning September 30, 2024, Azure AD Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
## Plan your deployment
-Before you download the Azure AD Multi-Factor Authentication Server, think about what your load and high availability requirements are. Use this information to decide how and where to deploy.
+Before you download the Azure Multi-Factor Authentication Server, think about what your load and high availability requirements are. Use this information to decide how and where to deploy.
A good guideline for the amount of memory you need is the number of users you expect to authenticate regularly.
When a master Azure MFA Server goes offline, the subordinate servers can still p
### Prepare your environment
-Make sure the server that you're using for Azure AD Multi-Factor Authentication meets the following requirements:
+Make sure the server that you're using for Azure Multi-Factor Authentication meets the following requirements:
-| Azure AD Multi-Factor Authentication Server Requirements | Description |
+| Azure Multi-Factor Authentication Server Requirements | Description |
|: |: | | Hardware |<li>200 MB of hard disk space</li><li>x32 or x64 capable processor</li><li>1 GB or greater RAM</li> | | Software |<li>Windows Server 2016</li><li>Windows Server 2012 R2</li><li>Windows Server 2012</li><li>Windows Server 2008/R2 (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Windows 10</li><li>Windows 8.1, all editions</li><li>Windows 8, all editions</li><li>Windows 7, all editions (with [ESU](/lifecycle/faq/extended-security-updates) only)</li><li>Microsoft .NET 4.0 Framework</li><li>IIS 7.0 or greater if installing the user portal or web service SDK</li> |
Make sure the server that you're using for Azure AD Multi-Factor Authentication
There are three web components that make up Azure MFA Server: * Web Service SDK - Enables communication with the other components and is installed on the Azure MFA application server
-* User portal - An IIS web site that allows users to enroll in Azure AD Multi-Factor Authentication (MFA) and maintain their accounts.
+* User portal - An IIS web site that allows users to enroll in Azure Multi-Factor Authentication (MFA) and maintain their accounts.
* Mobile App Web Service - Enables using a mobile app like the Microsoft Authenticator app for two-step verification. All three components can be installed on the same server if the server is internet-facing. If breaking up the components, the Web Service SDK is installed on the Azure MFA application server and the User portal and Mobile App Web Service are installed on an internet-facing server.
If you aren't using the Event Confirmation feature, and your users aren't using
Follow these steps to download the Azure AD Multi-Factor Authentication Server from the Azure portal: > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers who would like to require multi-factor authentication (MFA) from their users should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
>
-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
> > Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. The following steps only work if you were an existing MFA Server customer.
Now that you have downloaded the server you can install and configure it. Be sur
To ease rollout, allow MFA Server to communicate with your users. MFA Server can send an email to inform them that they have been enrolled for two-step verification.
-The email you send should be determined by how you configure your users for two-step verification. For example, if you are able to import phone numbers from the company directory, the email should include the default phone numbers so that users know what to expect. If you do not import phone numbers, or your users are going to use the mobile app, send them an email that directs them to complete their account enrollment. Include a hyperlink to the Azure AD Multi-Factor Authentication User portal in the email.
+The email you send should be determined by how you configure your users for two-step verification. For example, if you are able to import phone numbers from the company directory, the email should include the default phone numbers so that users know what to expect. If you do not import phone numbers, or your users are going to use the mobile app, send them an email that directs them to complete their account enrollment. Include a hyperlink to the Azure Multi-Factor Authentication User portal in the email.
The content of the email also varies depending on the method of verification that has been set for the user (phone call, SMS, or mobile app). For example, if the user is required to use a PIN when they authenticate, the email tells them what their initial PIN has been set to. Users are required to change their PIN during their first verification.
Once you have upgraded to or installed MFA Server version 8.x or higher, it is r
- Set up and configure the [User portal](howto-mfaserver-deploy-userportal.md) for user self-service. - Set up and configure the Azure MFA Server with [Active Directory Federation Service](multi-factor-authentication-get-started-adfs.md), [RADIUS Authentication](howto-mfaserver-dir-radius.md), or [LDAP Authentication](howto-mfaserver-dir-ldap.md).-- Set up and configure [Remote Desktop Gateway and Azure AD Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md).-- [Deploy the Azure AD Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).-- [Advanced scenarios with Azure AD Multi-Factor Authentication and third-party VPNs](howto-mfaserver-nps-vpn.md).
+- Set up and configure [Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md).
+- [Deploy the Azure Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).
+- [Advanced scenarios with Azure Multi-Factor Authentication and third-party VPNs](howto-mfaserver-nps-vpn.md).
active-directory Howto Mfaserver Dir Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-ad.md
Previously updated : 11/21/2019 Last updated : 10/30/2022
Use the Directory Integration section of the Azure MFA Server to integrate with Active Directory or another LDAP directory. You can configure attributes to match the directory schema and set up automatic user synchronization. > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
>
-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
>
-> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
## Settings
active-directory Howto Mfaserver Dir Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-radius.md
Previously updated : 07/29/2021 Last updated : 10/30/2022
RADIUS is a standard protocol to accept authentication requests and to process those requests. The Azure Multi-Factor Authentication Server can act as a RADIUS server. Insert it between your RADIUS client (VPN appliance) and your authentication target to add two-step verification. Your authentication target could be Active Directory, an LDAP directory, or another RADIUS server. For Azure Multi-Factor Authentication (MFA) to function, you must configure the Azure MFA Server so that it can communicate with both the client servers and the authentication target. The Azure MFA Server accepts requests from a RADIUS client, validates credentials against the authentication target, adds Azure Multi-Factor Authentication, and sends a response back to the RADIUS client. The authentication request only succeeds if both the primary authentication and the Azure Multi-Factor Authentication succeed. > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
> > To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md). > > If you use cloud-based MFA, see [Integrate your existing NPS infrastructure with Azure Multi-Factor Authentication](howto-mfa-nps-extension.md).
->
-> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
> [!NOTE] > The MFA Server only supports PAP (password authentication protocol) and MSCHAPv2 (Microsoft's Challenge-Handshake Authentication Protocol) RADIUS protocols when acting as a RADIUS server. Other protocols, like EAP (extensible authentication protocol), can be used when the MFA server acts as a RADIUS proxy to another RADIUS server that supports that protocol.
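To make the PAP limitation concrete, here is a rough sketch of a RADIUS client sending a PAP Access-Request to a server such as MFA Server acting as a RADIUS server; it uses the third-party `pyrad` package, and the server address, shared secret, username, password, and dictionary file path are all placeholders rather than values from the article.

```python
# Sketch: send a PAP Access-Request to a RADIUS server (for example, MFA Server
# acting as a RADIUS server). Requires the third-party pyrad package and a
# RADIUS dictionary file.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
from pyrad import packet

client = Client(server="mfa.contoso.local",      # placeholder RADIUS server
                secret=b"shared-secret",          # placeholder shared secret
                dict=Dictionary("dictionary"))    # path to a RADIUS dictionary file

req = client.CreateAuthPacket(code=packet.AccessRequest, User_Name="alice")
req["User-Password"] = req.PwCrypt("P@ssw0rd")    # PAP: password encrypted with the shared secret

reply = client.SendPacket(req)
print("Access-Accept" if reply.code == packet.AccessAccept else "Access-Reject")
```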
active-directory Howto Mfaserver Nps Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-nps-rdg.md
Previously updated : 07/11/2018 Last updated : 10/30/2022
Since Windows Authentication for terminal services is not supported for Server 2
Install the Azure Multi-Factor Authentication Server on a separate server, which proxies the RADIUS request back to the NPS on the Remote Desktop Gateway Server. After NPS validates the username and password, it returns a response to the Multi-Factor Authentication Server. Then, the MFA Server performs the second factor of authentication and returns a result to the gateway. > [!IMPORTANT]
-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication.
+> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure MFA service by using the latest Migration Utility included in the most recent [Azure MFA Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure MFA Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md).
>
-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md).
> > If you use cloud-based MFA, see how to [integrate with RADIUS authentication for Azure Multi-Factor Authentication](howto-mfa-nps-extension.md).
->
-> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual.
## Prerequisites
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-overview.md
At a high level, there are two types of identities: human and machine/non-human
## Supported scenarios + Here are some ways you can use workload identities:+
+- Access Azure AD protected resources without needing to manage secrets for workloads that run on Azure using [managed identity](../managed-identities-azure-resources/overview.md).
+- Access Azure AD protected resources without needing to manage secrets for supported scenarios such as GitHub Actions, workloads running on Kubernetes, or workloads running in compute platforms outside of Azure using [workload identity federation](workload-identity-federation.md).
- Review service principals and applications that are assigned to privileged directory roles in Azure AD using [access reviews for service principals](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md).-- Access Azure AD protected resources without needing to manage secrets (for supported scenarios) using [workload identity federation](workload-identity-federation.md). - Apply Conditional Access policies to service principals owned by your organization using [Conditional Access for workload identities](../conditional-access/workload-identity.md). - Secure workload identities with [Identity Protection](../identity-protection/concept-workload-identity-risk.md). + ## Next steps Learn how to [secure access of workload identities](../conditional-access/workload-identity.md) with adaptive policies.
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
ms.devlang: Previously updated : 02/17/2022 Last updated : 10/30/2022
Your code can use a managed identity to request access tokens for services that
The following diagram shows how managed service identities work with Azure virtual machines (VMs):
-![Managed service identities and Azure VMs](media/how-managed-identities-work-vm/data-flow.png)
+[![Managed service identities and Azure VMs](media/how-managed-identities-work-vm/data-flow.png)](media/how-managed-identities-work-vm/data-flow.png#lightbox)
+
+The following table shows the differences between the system-assigned and user-assigned managed identities:
| Property | System-assigned managed identity | User-assigned managed identity | ||-|--|
The following diagram shows how managed service identities work with Azure virtu
2. Azure Resource Manager creates a service principal in Azure AD for the identity of the VM. The service principal is created in the Azure AD tenant that's trusted by the subscription.
-3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint, providing the endpoint with the service principal client ID and certificate.
+3. Azure Resource Manager updates the VM identity using the Azure Instance Metadata Service identity endpoint (for [Windows](/azure/virtual-machines/windows/instance-metadata-service) and [Linux](/azure/virtual-machines/linux/instance-metadata-service)), providing the endpoint with the service principal client ID and certificate.
4. After the VM has an identity, use the service principal information to grant the VM access to Azure resources. To call Azure Resource Manager, use Azure role-based access control (Azure RBAC) to assign the appropriate role to the VM service principal. To call Key Vault, grant your code access to the specific secret or key in Key Vault.
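As a sketch of what code on the VM ends up doing once these steps are complete, the `azure-identity` Python package can exchange the VM's system-assigned identity for an Azure Resource Manager token; the scope shown is just an example.

```python
# Sketch: code running on the VM requests an ARM access token through its
# system-assigned managed identity; no credentials are stored on the VM.
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential()  # uses the VM's system-assigned identity
token = credential.get_token("https://management.azure.com/.default")
print(token.expires_on)  # the bearer token itself is in token.token
```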
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
ms.devlang: Previously updated : 06/24/2022 Last updated : 10/30/2022
Here are some of the benefits of using managed identities:
There are two types of managed identities: -- **System-assigned**. Some Azure services allow you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity, an identity is created in Azure AD. The identity is tied to the lifecycle of that service instance. When the resource is deleted, Azure automatically deletes the identity for you. By design, only that Azure resource can use this identity to request tokens from Azure AD.-- **User-assigned**. You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](how-to-manage-ua-identity-portal.md) and assign it to one or more instances of an Azure service. For user-assigned managed identities, the identity is managed separately from the resources that use it. </br></br>
+- **System-assigned**. Some Azure resources, such as virtual machines, allow you to enable a managed identity directly on the resource. When you enable a system-assigned managed identity:
+ - A service principal of a special type is created in Azure AD for the identity. The service principal is tied to the lifecycle of that Azure resource. When the Azure resource is deleted, Azure automatically deletes the service principal for you.
+ - By design, only that Azure resource can use this identity to request tokens from Azure AD.
+ - You authorize the managed identity to have access to one or more services.
+- **User-assigned**. You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](how-to-manage-ua-identity-portal.md) and assign it to one or more Azure Resources. When you enable a user-assigned managed identity:
+ - A service principal of a special type is created in Azure AD for the identity. The service principal is managed separately from the resources that use it.
+ - User-assigned identities can be used by multiple resources.
+ - You authorize the managed identity to have access to one or more services.
+
The following table shows the differences between the two types of managed identities:
The following table shows the differences between the two types of managed ident
| Sharing across Azure resources | Can't be shared. <br/> It can only be associated with a single Azure resource. | Can be shared. <br/> The same user-assigned managed identity can be associated with more than one Azure resource. | | Common use cases | Workloads that are contained within a single Azure resource. <br/> Workloads for which you need independent identities. <br/> For example, an application that runs on a single virtual machine. | Workloads that run on multiple resources and can share a single identity. <br/> Workloads that need pre-authorization to a secure resource, as part of a provisioning flow. <br/> Workloads where resources are recycled frequently, but permissions should stay consistent. <br/> For example, a workload where multiple virtual machines need to access the same resource. |
-> [!IMPORTANT]
-> Regardless of the type of identity chosen, a managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed.
-
-<br/>
- ## How can I use managed identities for Azure resources? You can use managed identities by following the steps below: 1. Create a managed identity in Azure. You can choose between system-assigned managed identity or user-assigned managed identity.
-2. When working with a user-assigned managed identity, assign the managed identity to the "source" Azure Resource, such as an Azure Logic App or an Azure Web App.
+ 1. When using a user-assigned managed identity, you assign the managed identity to the "source" Azure Resource, such as a Virtual Machine, Azure Logic App or an Azure Web App.
3. Authorize the managed identity to have access to the "target" service. 4. Use the managed identity to access a resource. In this step, you can use the Azure SDK with the Azure.Identity library. Some "source" resources offer connectors that know how to use Managed identities for the connections. In that case, you use the identity as a feature of that "source" resource.
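For step 4, a minimal sketch with the Azure.Identity library in Python might look like the following for a user-assigned identity; the client ID and subscription ID are placeholders, and it assumes the identity has already been granted Reader access on the target.

```python
# Sketch: use a user-assigned managed identity from the "source" resource to
# call a "target" service (here, Azure Resource Manager) without any secrets.
from azure.identity import ManagedIdentityCredential
from azure.mgmt.resource import ResourceManagementClient

credential = ManagedIdentityCredential(client_id="<user-assigned-identity-client-id>")
client = ResourceManagementClient(credential, "<subscription-id>")

for rg in client.resource_groups.list():  # succeeds once the identity has Reader access
    print(rg.name)
```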
Operations on managed identities can be performed by using an Azure Resource Man
* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md) * [How to use managed identities with Azure Container Instances](../../container-instances/container-instances-managed-identity.md) * [Implementing managed identities for Microsoft Azure Resources](https://www.pluralsight.com/courses/microsoft-azure-resources-managed-identities-implementing)
-* Use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets
+* Use [workload identity federation for managed identities](../develop/workload-identity-federation.md) to access Azure Active Directory (Azure AD) protected resources without managing secrets
active-directory Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-arm.md
You learn how to:
## Grant access
-Using managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication. The Azure Resource Manager API supports Azure AD authentication. First, we need to grant this VM's identity access to a resource in Azure Resource Manager, in this case the Resource Group in which the VM is contained.
+Using managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication. The Azure Resource Manager API supports Azure AD authentication. First, we need to grant this VM's identity access to a resource in Azure Resource Manager, in this case, the Resource Group in which the VM is contained.
+1. Sign in to the [Azure portal](https://portal.azure.com) with your administrator account.
1. Navigate to the tab for **Resource Groups**.
-2. Select the specific **Resource Group** you used for your virtual machine.
-3. Go to **Access control(IAM)** in the left panel.
-4. Click to **Add** a new role assignment for your VM. Choose **Role** as **Reader**.
-5. In the next dropdown, **Assign access to** the resource **Virtual Machine**.
-6. Next, ensure the proper subscription is listed in the **Subscription** dropdown. And for **Resource Group**, select **All resource groups**.
-7. Finally, in **Select** choose your Linux Virtual Machine in the dropdown and click **Save**.
+1. Select the **Resource Group** you want to grant the VM's managed identity access to.
+1. In the left panel, select **Access control (IAM)**.
+1. Select **Add**, and then select **Add role assignment**.
+1. In the **Role** tab, select **Reader**. This role allows you to view all resources but doesn't allow you to make any changes.
+1. In the **Members** tab, for **Assign access to**, select **Managed identity**. Then, select **+ Select members**.
+1. Ensure the proper subscription is listed in the **Subscription** dropdown. For **Resource Group**, select **All resource groups**.
+1. In the **Managed identity** dropdown, select **Virtual Machine**.
+1. Finally, in **Select**, choose your Linux Virtual Machine in the dropdown and select **Save**.
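If you prefer to script the role assignment instead of clicking through the portal, here is a rough sketch using the `azure-mgmt-authorization` package; the subscription, resource group, and principal IDs are placeholders, and it assumes the caller is allowed to create role assignments at that scope.

```python
# Sketch: grant the VM's managed identity the built-in Reader role at
# resource-group scope (the scripted equivalent of the portal steps above).
import uuid
from azure.identity import AzureCliCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"                 # placeholder
resource_group = "<resource-group-name>"              # placeholder
vm_identity_object_id = "<vm-principal-object-id>"    # placeholder

scope = f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
# acdd72a7-3385-48ef-bd42-f606fba81ae7 is the well-known ID of the built-in Reader role.
reader_role_id = (f"/subscriptions/{subscription_id}/providers/"
                  "Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7")

client = AuthorizationManagementClient(AzureCliCredential(), subscription_id)
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),
    RoleAssignmentCreateParameters(
        role_definition_id=reader_role_id,
        principal_id=vm_identity_object_id,
        principal_type="ServicePrincipal",
    ),
)
```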
![Alt image text](media/msi-tutorial-linux-vm-access-arm/msi-permission-linux.png) ## Get an access token using the VM's system-assigned managed identity and use it to call Resource Manager
-To complete these steps, you will need an SSH client. If you are using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/about). If you need assistance configuring your SSH client's keys, see [How to Use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md), or [How to create and use an SSH public and private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md).
+To complete these steps, you'll need an SSH client. If you're using Windows, you can use the SSH client in the [Windows Subsystem for Linux](/windows/wsl/about). If you need assistance configuring your SSH client's keys, see [How to Use SSH keys with Windows on Azure](../../virtual-machines/linux/ssh-from-windows.md), or [How to create and use an SSH public and private key pair for Linux VMs in Azure](../../virtual-machines/linux/mac-create-ssh-keys.md).
-1. In the portal, navigate to your Linux VM and in the **Overview**, click **Connect**.  
+1. In the portal, navigate to your Linux VM and in the **Overview**, select **Connect**.  
2. **Connect** to the VM with the SSH client of your choice.  3. In the terminal window, using `curl`, make a request to the local managed identities for Azure resources endpoint to get an access token for Azure Resource Manager.    
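For reference, the request typically looks like the following sketch. It calls the Azure Instance Metadata Service endpoint (`169.254.169.254`) with the standard `2018-02-01` identity API version; the JSON response contains the `access_token` value used in later calls.

```bash
# Request an access token for Azure Resource Manager from the local IMDS endpoint.
# The Metadata header is required; the call only works from inside the VM.
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -H Metadata:true
```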
active-directory Tutorial Windows Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md
na Previously updated : 01/11/2022 Last updated : 10/30/2022
This tutorial shows you how to access the Azure Resource Manager API using a Win
- You also need a Windows Virtual machine that has system assigned managed identities enabled. - If you need to create a virtual machine for this tutorial, you can follow the article titled [Create a virtual machine with system-assigned identity enabled](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity)
+## Enable
++ ## Grant your VM access to a resource group in Resource Manager
-Using managed identities for Azure resources, your code can get access tokens to authenticate to resources that support Azure AD authentication and Azure Resource Manager supports Azure AD authentication. We need to grant this VMΓÇÖs system-assigned managed identity access to a resource in Resource Manager, in this case the Resource Group where you created the VM. Assign the [Reader](../../role-based-access-control/built-in-roles.md#reader) role to the managed-identity at the scope of the resource group we created for your **Windows VM**.
-
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Using managed identities for Azure resources, your application can get access tokens to authenticate to resources that support Azure AD authentication. The Azure Resource Manager API supports Azure AD authentication. We grant this VM's identity access to a resource in Azure Resource Manager, in this case a Resource Group. We assign the [Reader](../../role-based-access-control/built-in-roles.md#reader) role to the managed identity at the scope of the resource group.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) with your administrator account.
+1. Navigate to the tab for **Resource Groups**.
+1. Select the **Resource Group** that you want to grant the VM's managed identity access to.
+1. In the left panel, select **Access control (IAM)**.
+1. Select **Add**, and then select **Add role assignment**.
+1. In the **Role** tab, select **Reader**. This role allows you to view all resources, but doesn't allow you to make any changes.
+1. In the **Members** tab, for **Assign access to**, select **Managed identity**. Then, select **+ Select members**.
+1. Ensure the proper subscription is listed in the **Subscription** dropdown, and for **Resource Group**, select **All resource groups**.
+1. For the **Managed identity** dropdown, select **Virtual Machine**.
+1. Finally, in **Select**, choose your Windows Virtual Machine from the dropdown, and then select **Save**.
## Get an access token using the VM's system-assigned managed identity and use it to call Azure Resource Manager
-You will need to use **PowerShell** in this portion. If you donΓÇÖt have **PowerShell** installed, download it [here](/powershell/azure/).
+You'll need to use **PowerShell** in this portion. If you don't have **PowerShell** installed, download it [here](/powershell/azure/).
-1. In the portal, navigate to **Virtual Machines** and go to your Windows virtual machine and in the **Overview**, click **Connect**.
+1. In the portal, navigate to **Virtual Machines**, go to your Windows virtual machine, and in the **Overview**, select **Connect**.
2. Enter the **Username** and **Password** that you used when you created the Windows VM.
-3. Now that you have created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
+3. Now that you've created a **Remote Desktop Connection** with the virtual machine, open **PowerShell** in the remote session.
4. Using the Invoke-WebRequest cmdlet, make a request to the local managed identity for Azure resources endpoint to get an access token for Azure Resource Manager. ```powershell
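# A sketch of the token request. The IMDS endpoint (169.254.169.254) and api-version (2018-02-01)
# are the standard values for the Azure Instance Metadata Service identity endpoint.
$response = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/' -Headers @{Metadata="true"} -UseBasicParsing

# The response body is JSON; extract the access token for use in later Resource Manager calls.
$content = $response.Content | ConvertFrom-Json
$armToken = $content.access_token
```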
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 09/26/2022 Last updated : 10/30/2022
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Partner Tier1 Support](#partner-tier1-support) | Do not use - not intended for general use. | 4ba39ca4-527c-499a-b93d-d9b492c50246 | > | [Partner Tier2 Support](#partner-tier2-support) | Do not use - not intended for general use. | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 | > | [Password Administrator](#password-administrator) | Can reset passwords for non-administrators and Password Administrators. | 966707d0-3269-4727-9be2-8c3a10f19b9d |
-> [Permissions Management Administrator](#permissions-management-administrator) | Can manage all aspects of Permissions Management. | af78dc32-cf4d-46f9-ba4e-4428526346b5 |
+> | [Permissions Management Administrator](#permissions-management-administrator) | Manage all aspects of Entra Permissions Management. | af78dc32-cf4d-46f9-ba4e-4428526346b5 |
> | [Power BI Administrator](#power-bi-administrator) | Can manage all aspects of the Power BI product. | a9ea8996-122f-4c74-9520-8edcd192826c | > | [Power Platform Administrator](#power-platform-administrator) | Can create and manage all aspects of Microsoft Dynamics 365, Power Apps and Power Automate. | 11648597-926c-4cf3-9c36-bcebb0ba8dcc | > | [Printer Administrator](#printer-administrator) | Can manage all aspects of printers and printer connectors. | 644ef478-e28f-4e28-b9dc-3fdde9aa0b1f |
Users with this role have access to all administrative features in Azure Active
> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | > | microsoft.office365.messageCenter/securityMessages/read | Read security messages in Message Center in the Microsoft 365 admin center | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
+> | microsoft.office365.organizationalMessages/allEntities/allProperties/allTasks | Manage all aspects of Microsoft 365 organizational message center |
> | microsoft.office365.protectionCenter/allEntities/allProperties/allTasks | Manage all aspects of the Security and Compliance centers | > | microsoft.office365.search/content/manage | Create and delete content, and read and update all properties in Microsoft Search | > | microsoft.office365.securityComplianceCenter/allEntities/allTasks | Create and delete all resources, and read and update standard properties in the Office 365 Security & Compliance Center |
Users with this role have access to all administrative features in Azure Active
Users in this role can read settings and administrative information across Microsoft 365 services but can't take management actions. Global Reader is the read-only counterpart to Global Administrator. Assign Global Reader instead of Global Administrator for planning, audits, or investigations. Use Global Reader in combination with other limited admin roles like Exchange Administrator to make it easier to get work done without assigning the Global Administrator role. Global Reader works with Microsoft 365 admin center, Exchange admin center, SharePoint admin center, Teams admin center, Security center, Compliance center, Azure AD admin center, and Device Management admin center.
+Users with this role **cannot** do the following:
+
+- Access the Purchase Services area in the Microsoft 365 admin center.
+ > [!NOTE]
-> Global Reader role has a few limitations right now -
+> Global Reader role has the following limitations:
> >- [OneDrive admin center](https://admin.onedrive.com/) - OneDrive admin center does not support the Global Reader role >- [Microsoft 365 admin center](https://admin.microsoft.com/Adminportal/Home#/homepage) - Global Reader can't read integrated apps. You won't find the **Integrated apps** tab under **Settings** in the left pane of Microsoft 365 admin center.
Users in this role can read settings and administrative information across Micro
> - [SharePoint](https://admin.microsoft.com/sharepoint) - Global Reader currently can't access SharePoint using PowerShell. > - [Power Platform admin center](https://admin.powerplatform.microsoft.com) - Global Reader is not yet supported in the Power Platform admin center. > - Microsoft Purview doesn't support the Global Reader role.
->
-> These features are currently in development.
->
> [!div class="mx-tableFixed"] > | Actions | Description |
Users in this role can read settings and administrative information across Micro
> | microsoft.commerce.billing/allEntities/allProperties/read | Read all resources of Office 365 billing | > | microsoft.edge/allEntities/allProperties/read | Read all aspects of Microsoft Edge | > | microsoft.insights/allEntities/allProperties/read | Read all aspects of Viva Insights |
-> | microsoft.office365.exchange/allEntities/standard/read | Read all resources of Exchange Online |
> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | > | microsoft.office365.messageCenter/securityMessages/read | Read security messages in Message Center in the Microsoft 365 admin center | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
+> | microsoft.office365.organizationalMessages/allEntities/allProperties/read | Read all aspects of Microsoft 365 organizational message center |
> | microsoft.office365.protectionCenter/allEntities/allProperties/read | Read all properties in the Security and Compliance centers | > | microsoft.office365.securityComplianceCenter/allEntities/read | Read standard properties in Microsoft 365 Security and Compliance Center | > | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports |
This role can create and manage all security groups. However, Intune Administrat
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.cloudPC/allEntities/allProperties/allTasks | Manage all aspects of Windows 365 | > | microsoft.intune/allEntities/allTasks | Manage all aspects of Microsoft Intune |
+> | microsoft.office365.organizationalMessages/allEntities/allProperties/read | Read all aspects of Microsoft 365 organizational message center |
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users with this role have global permissions within Microsoft SharePoint Online,
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
+> | microsoft.directory/groups/hiddenMembers/read | Read hidden members of Security groups and Microsoft 365 groups, including role-assignable groups |
> | microsoft.directory/groups.unified/create | Create Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups.unified/delete | Delete Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups.unified/restore | Restore Microsoft 365 groups from soft-deleted container, excluding role-assignable groups |
Users with this role can access tenant level aggregated data and associated insi
## User Administrator
-Assign the User Administrator role to users who need to do the following:
+Assign the User Administrator role to users who need to do the following:
| Permission | More information | | | |
azure-functions Durable Functions Event Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-event-publishing.md
Following are some scenarios where this feature is useful:
## Prerequisites * Install [Microsoft.Azure.WebJobs.Extensions.DurableTask](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) in your Durable Functions project.
-* Install an [Azure Storage Emulator](../../storage/common/storage-use-emulator.md) or use an existing Azure Storage account.
+* Install the [Azurite storage emulator](../../storage/common/storage-use-azurite.md) or use an existing Azure Storage account.
* Install [Azure CLI](/cli/azure/) or use [Azure Cloud Shell](../../cloud-shell/overview.md) ## Create a custom Event Grid topic
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
There are many significant tradeoffs between the various supported storage provi
|- |- |- |- | | Official support status | ✅ Generally available (GA) | ⚠ Public preview | ⚠ Public preview | | External dependencies | Azure Storage account (general purpose v1) | Azure Event Hubs<br/>Azure Storage account (general purpose) | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) or Azure SQL Database |
-| Local development and emulation options | [Azurite v3.12+](../../storage/common/storage-use-azurite.md) (cross platform)<br/>[Azure Storage Emulator](../../storage/common/storage-use-emulator.md) (Windows only) | Supports in-memory emulation of task hubs ([more information](https://microsoft.github.io/durabletask-netherite/#/emulation)) | SQL Server Developer Edition (supports [Windows](/sql/database-engine/install-windows/install-sql-server), [Linux](/sql/linux/sql-server-linux-setup), and [Docker containers](/sql/linux/sql-server-linux-docker-container-deployment)) |
+| Local development and emulation options | [Azurite v3.12+](../../storage/common/storage-use-azurite.md) (cross platform) | Supports in-memory emulation of task hubs ([more information](https://microsoft.github.io/durabletask-netherite/#/emulation)) | SQL Server Developer Edition (supports [Windows](/sql/database-engine/install-windows/install-sql-server), [Linux](/sql/linux/sql-server-linux-setup), and [Docker containers](/sql/linux/sql-server-linux-docker-container-deployment)) |
| Task hub configuration | Explicit | Explicit | Implicit by default ([more information](https://microsoft.github.io/durabletask-mssql/#/taskhubs)) | | Maximum throughput | Moderate | Very high | Moderate | | Maximum orchestration/entity scale-out (nodes) | 16 | 32 | N/A |
azure-functions Durable Functions Webjobs Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-webjobs-sdk.md
To complete the steps in this article:
(You can use [Visual Studio Code](https://code.visualstudio.com/) instead, but some of the instructions are specific to Visual Studio.)
-* Install and run an [Azure Storage Emulator](../../storage/common/storage-use-emulator.md). An alternative is to update the *App.config* file with a real Azure Storage connection string.
+* Install and run the [Azurite storage emulator](../../storage/common/storage-use-azurite.md). An alternative is to update the *App.config* file with a real Azure Storage connection string.
## WebJobs SDK versions
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
The following publish options apply, based on version:
| **`--no-build`** | Project isn't built during publishing. For Python, `pip install` isn't performed. | | **`--nozip`** | Turns the default `Run-From-Package` mode off. | | **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.|
-| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). |
+| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). |
| **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. | | **`--slot`** | Optional name of a specific slot to which to publish. |
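For example, a typical invocation that also pushes local settings during publishing might look like this (the function app name is a placeholder):

```bash
# Publish the project to Azure and push values from local.settings.json,
# prompting before overwriting any settings that already exist.
func azure functionapp publish MyFunctionApp --publish-local-settings -i
```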
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
The following application settings can be included in the **`Values`** array whe
| Setting | Values | Description | |--|--|--|
-|**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azurite Emulator](../storage/common/storage-use-azurite.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. The emulator is useful during development, but you should test with an actual storage connection before deployment.|
+|**`AzureWebJobsStorage`**| Storage account connection string, or<br/>`UseDevelopmentStorage=true`| Contains the connection string for an Azure storage account. Required when using triggers other than HTTP. For more information, see the [`AzureWebJobsStorage`] reference.<br/>When you have the [Azurite Emulator](../storage/common/storage-use-azurite.md) installed locally and you set [`AzureWebJobsStorage`] to `UseDevelopmentStorage=true`, Core Tools uses the emulator. For more information, see [Local storage emulator](#local-storage-emulator).|
|**`AzureWebJobs.<FUNCTION_NAME>.Disabled`**| `true`\|`false` | To disable a function when running locally, add `"AzureWebJobs.<FUNCTION_NAME>.Disabled": "true"` to the collection, where `<FUNCTION_NAME>` is the name of the function. To learn more, see [How to disable functions in Azure Functions](disable-function.md#localsettingsjson). | |**`FUNCTIONS_WORKER_RUNTIME`** | `dotnet`<br/>`dotnet-isolated`<br/>`node`<br/>`java`<br/>`powershell`<br/>`python`| Indicates the targeted language of the Functions runtime. Required for version 2.x and higher of the Functions runtime. This setting is generated for your project by Core Tools. To learn more, see the [`FUNCTIONS_WORKER_RUNTIME`](functions-app-settings.md#functions_worker_runtime) reference.| | **`FUNCTIONS_WORKER_RUNTIME_VERSION`** | `~7` |Indicates to use PowerShell 7 when running locally. If not set, then PowerShell Core 6 is used. This setting is only used when running locally. The PowerShell runtime version is determined by the `powerShellVersion` site configuration setting, when it runs in Azure, which can be [set in the portal](functions-reference-powershell.md#changing-the-powershell-version). |
When you develop your functions locally, any local settings required by your app
+ [Visual Studio](functions-develop-vs.md#function-app-settings) + [Azure Functions Core Tools](functions-run-local.md#local-settings)
+## Triggers and bindings
+
+When you develop your functions locally, you need to take trigger and binding behaviors into consideration. The easiest way to test bindings during local development is to use connection strings that target live Azure services. You can target live services by adding the appropriate connection string settings to the `Values` array in the local.settings.json file. When you do this, local executions during testing impact live service data. Because of this, consider setting up separate services to use during development and testing, and then switch to different services in production. You can also use a local storage emulator.
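For example, a minimal local.settings.json sketch that targets a live storage account might look like the following (the account name and key are placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<ACCOUNT_NAME>;AccountKey=<ACCOUNT_KEY>;EndpointSuffix=core.windows.net"
  }
}
```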
+
+## Local storage emulator
+
+During local development, you can use the local [Azurite emulator](../storage/common/storage-use-azurite.md) when testing functions with Azure Storage bindings (Queue Storage, Blob Storage, and Table Storage), without having to connect to remote storage services. Azurite integrates with Visual Studio Code and Visual Studio, and you can also run it from the command prompt by using npm. For more information, see [Use the Azurite emulator for local Azure Storage development](../storage/common/storage-use-azurite.md).
+
+The following setting in the `Values` collection of the local.settings.json file tells the local Functions host to use Azurite for the default `AzureWebJobsStorage` connection:
+
+ ```json
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true"
+ ```
+
+With this setting in place, any Azure Storage trigger or binding that uses `AzureWebJobsStorage` as its connection connects to Azurite when running locally. During local execution, you must have Azurite installed and running. The emulator is useful during development, but you should test with an actual storage connection before deployment. When you publish your project, don't publish this setting. Instead, use an Azure Storage connection string for the same setting in your function app in Azure.
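For example, you could set the real connection string on the deployed function app with the Azure CLI, as in the following sketch (the app name, resource group, and connection string are placeholders):

```azurecli
# Set AzureWebJobsStorage on the function app in Azure to an actual storage connection string.
az functionapp config appsettings set --name MyFunctionApp --resource-group myResourceGroup \
    --settings "AzureWebJobsStorage=<CONNECTION_STRING>"
```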
+ ## Next steps + To learn more about local development of compiled C# functions (both in-process and isolated process) using Visual Studio, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Your code can also read the function app settings values as environment variable
## Configure the project for local development
-The Functions runtime uses an Azure Storage account internally. For all trigger types other than HTTP and webhooks, set the `Values.AzureWebJobsStorage` key to a valid Azure Storage account connection string. Your function app can also use the [Azure Storage Emulator](../storage/common/storage-use-emulator.md) for the `AzureWebJobsStorage` connection setting that's required by the project. To use the emulator, set the value of `AzureWebJobsStorage` to `UseDevelopmentStorage=true`. Change this setting to an actual storage account connection string before deployment.
+The Functions runtime uses an Azure Storage account internally. For all trigger types other than HTTP and webhooks, set the `Values.AzureWebJobsStorage` key to a valid Azure Storage account connection string. Your function app can also use the [Azurite emulator](../storage/common/storage-use-azurite.md) for the `AzureWebJobsStorage` connection setting that's required by the project. To use the emulator, set the value of `AzureWebJobsStorage` to `UseDevelopmentStorage=true`. Change this setting to an actual storage account connection string before deployment. For more information, see [Local storage emulator](functions-develop-local.md#local-storage-emulator).
To set the storage account connection string:
In C# class library functions, the bindings used by the function are defined by
![Create a Queue storage trigger function](./media/functions-develop-vs/functions-vstools-create-queuetrigger.png)
- You'll then be prompted to choose between two Azure storage emulators or referencing a provisioned Azure storage account.
+ You'll then be prompted to choose between using the Azurite storage emulator or referencing a provisioned Azure storage account.
- This trigger example uses a connection string with a key named `QueueStorage`. This key, stored in the [local.settings.json file](functions-develop-local.md#local-settings-file), either references the Azure storage emulators or an Azure storage account.
+ This trigger example uses a connection string with a key named `QueueStorage`. This key, stored in the [local.settings.json file](functions-develop-local.md#local-settings-file), references either the Azurite emulator or an Azure storage account.
4. Examine the newly added class. You see a static `Run()` method that's attributed with the `FunctionName` attribute. This attribute indicates that the method is the entry point for the function.
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
The function app settings values can also be read in your code as environment va
* [PowerShell](functions-reference-powershell.md#environment-variables) * [Python](functions-reference-python.md#environment-variables)
-When no valid storage connection string is set for [`AzureWebJobsStorage`] and the emulator isn't being used, the following error message is shown:
+When no valid storage connection string is set for [`AzureWebJobsStorage`] and a local storage emulator isn't being used, the following error message is shown:
> Missing value for AzureWebJobsStorage in local.settings.json. This is required for all triggers other than HTTP. You can run 'func azure functionapp fetch-app-settings \<functionAppName\>' or specify a connection string in local.settings.json. ### Get your storage connection strings
-Even when using the Microsoft Azure Storage Emulator for development, you may want to run locally with an actual storage connection. Assuming you have already [created a storage account](../storage/common/storage-account-create.md), you can get a valid storage connection string in one of several ways:
+Even when using the [Azurite storage emulator](functions-develop-local.md#local-storage-emulator) for development, you may want to run locally with an actual storage connection. Assuming you have already [created a storage account](../storage/common/storage-account-create.md), you can get a valid storage connection string in one of several ways:
# [Portal](#tab/portal)
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
The storage account connection string must be updated when you regenerate storag
### Shared storage accounts
-It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the Azure Storage Emulator. In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment.
+It's possible for multiple function apps to share the same storage account without any issues. For example, in Visual Studio you can develop multiple apps using the [Azurite storage emulator](functions-develop-local.md#local-storage-emulator). In this case, the emulator acts like a single storage account. The same storage account used by your function app can also be used to store your application data. However, this approach isn't always a good idea in a production environment.
You may need to use separate store accounts to [avoid host ID collisions](#avoiding-host-id-collisions).
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
recommendations: false Previously updated : 07/14/2022 Last updated : 10/21/2022 # Isolation guidelines for Impact Level 5 workloads
Virtual machine scale sets aren't currently supported on Azure Dedicated Host. B
> [!IMPORTANT] > As new hardware generations become available, some VM types might require reconfiguration (scale up or migration to a new VM SKU) to ensure they remain on properly dedicated hardware. For more information, see **[Virtual machine isolation in Azure](../virtual-machines/isolation.md).**
-#### Disk encryption for virtual machines
+#### Disk encryption options
-You can encrypt the storage that supports these virtual machines in one of two ways to support necessary encryption standards.
+There are several types of encryption available for your managed disks supporting virtual machines and virtual machine scale sets:
-- Use Azure Disk Encryption to encrypt the drives by using dm-crypt (Linux) or BitLocker (Windows):
- - [Enable Azure Disk Encryption for Linux](../virtual-machines/linux/disk-encryption-overview.md)
- - [Enable Azure Disk Encryption for Windows](../virtual-machines/windows/disk-encryption-overview.md)
-- Use Azure Storage service encryption for storage accounts with your own key to encrypt the storage account that holds the disks:
- - [Storage service encryption with customer-managed keys](../storage/common/customer-managed-keys-configure-key-vault.md)
+- Azure Disk Encryption
+- Server-side encryption of Azure Disk Storage
+- Encryption at host
+- Confidential disk encryption
-#### Disk encryption for virtual machine scale sets
+All these options enable you to have sole control over encryption keys. For more information, see [Overview of managed disk encryption options](../virtual-machines/disk-encryption-overview.md).
-You can encrypt disks that support virtual machine scale sets by using Azure Disk Encryption:
--- [Encrypt disks in virtual machine scale sets](../virtual-machine-scale-sets/disk-encryption-key-vault.md) ## Containers
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md
recommendations: false Previously updated : 08/30/2022 Last updated : 10/30/2022
-# Public safety and justice in Azure Government
+# Azure for public safety and justice
## Overview
Microsoft treats Criminal Justice Information Services (CJIS) compliance as a co
The [Criminal Justice Information Services](https://www.fbi.gov/services/cjis) (CJIS) Division of the US Federal Bureau of Investigation (FBI) gives state, local, and federal law enforcement and criminal justice agencies access to criminal justice information (CJI), for example, fingerprint records and criminal histories. Law enforcement and other government agencies in the United States must ensure that their use of cloud services for the transmission, storage, or processing of CJI complies with the [CJIS Security Policy](https://www.fbi.gov/services/cjis/cjis-security-policy-resource-center/view), which establishes minimum security requirements and controls to safeguard CJI.
-### Azure Government and CJIS Security Policy
+### Azure and CJIS Security Policy
Microsoft's commitment to meeting the applicable CJIS regulatory controls helps criminal justice organizations be compliant with the CJIS Security Policy when implementing cloud-based solutions. For more information about Azure support for CJIS, see [Azure CJIS compliance offering](/azure/compliance/offerings/offering-cjis).
While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 val
Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management).
-With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. Keys generated inside the Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys) Therefore, if you use CMK stored in Azure Key Vault HSMs, you effectively maintain sole ownership of encryption keys.
+With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. Keys generated inside the Key Vault HSMs aren't exportable: there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For more information, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys) Therefore, if you use CMK stored in Azure Key Vault HSMs, you effectively maintain sole ownership of encryption keys.
### Data encryption in transit
Technologies like [Intel Software Guard Extensions](https://software.intel.com/s
Insider threat is characterized as potential for providing back-door connections and cloud service provider (CSP) privileged administrator access to your systems and data. For more information on how Microsoft restricts insider access to your data, see [Restrictions on insider access](./documentation-government-plan-security.md#restrictions-on-insider-access).
-All Azure and Azure Government employees in the United States are subject to Microsoft background checks. For more information, see [Screening](./documentation-government-plan-security.md#screening). Azure Government provides you with an extra layer of protection through contractual commitments regarding storage of your data in the United States and limiting potential access to systems processing your data to screened US persons that have completed fingerprint background checks and criminal records checks to address CJIS requirements.
- ## Monitoring your Azure resources Azure provides essential services that you can use to gain in-depth insight into your provisioned Azure resources and get alerted about suspicious activity, including outside attacks aimed at your applications and data. For more information about these services, see [Customer monitoring of Azure resources](./documentation-government-plan-security.md#customer-monitoring-of-azure-resources).
azure-monitor Action Groups Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-logic-app.md
- Title: Trigger complex actions with Azure Monitor alerts
-description: Learn how to create a logic app action to process Azure Monitor alerts.
-- Previously updated : 09/07/2022---
-# How to trigger complex actions with Azure Monitor alerts
-
-This article shows you how to set up and trigger a logic app to create a conversation in Microsoft Teams when an alert fires.
-
-## Overview
-
-When an Azure Monitor alert triggers, it calls an [action group](./action-groups.md). Action groups allow you to trigger one or more actions to notify others about an alert and also remediate it.
-
-The general process is:
--- Create the logic app for the respective alert type.--- Import a sample payload for the respective alert type into the logic app.--- Define the logic app behavior.--- Copy the HTTP endpoint of the logic app into an Azure action group.-
-The process is similar if you want the logic app to perform a different action.
-
-## Create an activity log alert: Administrative
-
-1. [Create a logic app](~/articles/logic-apps/quickstart-create-first-logic-app-workflow.md).
-
-1. Select the trigger: **When a HTTP request is received**.
-
-1. In the dialog for **When an HTTP request is received**, select **Use sample payload to generate schema**.
-
- ![Screenshot that shows the When an H T T P request dialog box and the Use sample payload to generate schema option selected. ](~/articles/app-service/media/tutorial-send-email/generate-schema-with-payload.png)
-
-1. Copy and paste the following sample payload into the dialog box:
-
- ```json
- {
- "schemaId": "Microsoft.Insights/activityLogs",
- "data": {
- "status": "Activated",
- "context": {
- "activityLog": {
- "authorization": {
- "action": "microsoft.insights/activityLogAlerts/write",
- "scope": "/subscriptions/…"
- },
- "channels": "Operation",
- "claims": "…",
- "caller": "logicappdemo@contoso.com",
- "correlationId": "91ad2bac-1afa-4932-a2ce-2f8efd6765a3",
- "description": "",
- "eventSource": "Administrative",
- "eventTimestamp": "2018-04-03T22:33:11.762469+00:00",
- "eventDataId": "ec74c4a2-d7ae-48c3-a4d0-2684a1611ca0",
- "level": "Informational",
- "operationName": "microsoft.insights/activityLogAlerts/write",
- "operationId": "61f59fc8-1442-4c74-9f5f-937392a9723c",
- "resourceId": "/subscriptions/…",
- "resourceGroupName": "LOGICAPP-DEMO",
- "resourceProviderName": "microsoft.insights",
- "status": "Succeeded",
- "subStatus": "",
- "subscriptionId": "…",
- "submissionTimestamp": "2018-04-03T22:33:36.1068742+00:00",
- "resourceType": "microsoft.insights/activityLogAlerts"
- }
- },
- "properties": {}
- }
- }
- ```
-
-1. The **Logic Apps Designer** displays a pop-up window to remind you that the request sent to the logic app must set the **Content-Type** header to **application/json**. Close the pop-up window. The Azure Monitor alert sets the header.
-
- ![Set the Content-Type header](media/action-groups-logic-app/content-type-header.png "Set the Content-Type header")
-
-1. Select **+** **New step** and then choose **Add an action**.
-
- ![Add an action](media/action-groups-logic-app/add-action.png "Add an action")
-
-1. Search for and select the Microsoft Teams connector. Choose the **Post message in a chat or channel** action.
-
- ![Microsoft Teams actions](media/action-groups-logic-app/microsoft-teams-actions-2.png "Microsoft Teams actions")
-
-1. Configure the Microsoft Teams action. The **Logic Apps Designer** asks you to authenticate to your work or school account. Choose the **Team ID** and **Channel ID** to send the message to.
-
-13. Configure the message by using a combination of static text and references to the \<fields\> in the dynamic content. Copy and paste the following text into the **Message** field:
-
- ```text
- Activity Log Alert: <eventSource>
- operationName: <operationName>
- status: <status>
- resourceId: <resourceId>
- ```
-
- Then search for and replace the \<fields\> with dynamic content tags of the same name.
-
- > [!NOTE]
- > There are two dynamic fields that are named **status**. Add both of these fields to the message. Use the field that's in the **activityLog** property bag and delete the other field. Hover your cursor over the **status** field to see the fully qualified field reference, as shown in the following screenshot:
-
- ![Microsoft Teams action: Post a message](media/action-groups-logic-app/teams-action-post-message.png "Microsoft Teams action: Post a message")
-
-1. At the top of the **Logic Apps Designer**, select **Save** to save your logic app.
-
-1. Open your existing action group and add an action to reference the logic app. If you don't have an existing action group, see [Create and manage action groups in the Azure portal](./action-groups.md) to create one. DonΓÇÖt forget to save your changes.
-
- ![Update the action group](media/action-groups-logic-app/update-action-group.png "Update the action group")
-
-The next time an alert calls your action group, your logic app is called.
-
-## Create a service health alert
-
-Azure Service Health entries are part of the activity log. The process for creating the alert is similar to [creating an activity log alert](#create-an-activity-log-alert-administrative), but with a few changes:
--- Steps 1 through 3 are the same.-- For step 4, use the following sample payload for the HTTP request trigger:-
- ```json
- {
- "schemaId": "Microsoft.Insights/activityLogs",
- "data": {
- "status": "Activated",
- "context": {
- "activityLog": {
- "channels": "Admin",
- "correlationId": "e416ed3c-8874-4ec8-bc6b-54e3c92a24d4",
- "description": "…",
- "eventSource": "ServiceHealth",
- "eventTimestamp": "2018-04-03T22:44:43.7467716+00:00",
- "eventDataId": "9ce152f5-d435-ee31-2dce-104228486a6d",
- "level": "Informational",
- "operationName": "Microsoft.ServiceHealth/incident/action",
- "operationId": "e416ed3c-8874-4ec8-bc6b-54e3c92a24d4",
- "properties": {
- "title": "...",
- "service": "...",
- "region": "Global",
- "communication": "...",
- "incidentType": "Incident",
- "trackingId": "...",
- "impactStartTime": "2018-03-22T21:40:00.0000000Z",
- "impactMitigationTime": "2018-03-22T21:41:00.0000000Z",
- "impactedServices": "[{\"ImpactedRegions\"}]",
- "defaultLanguageTitle": "...",
- "defaultLanguageContent": "...",
- "stage": "Active",
- "communicationId": "11000001466525",
- "version": "0.1.1"
- },
- "status": "Active",
- "subscriptionId": "...",
- "submissionTimestamp": "2018-04-03T22:44:50.8013523+00:00"
- }
- },
- "properties": {}
- }
- }
- ```
--- Steps 5 and 6 are the same.-- For steps 7 through 10, use the following process:-
- 1. Select **+** **New step** and then choose **Add a condition**. Set the following conditions so the logic app executes only when the input data matches the values below. When entering the version value into the text box, put quotes around it ("0.1.1") to make sure that it's evaluated as a string and not a numeric type. The system does not show the quotes if you return to the page, but the underlying code still maintains the string type.
- - `schemaId == Microsoft.Insights/activityLogs`
- - `eventSource == ServiceHealth`
- - `version == "0.1.1"`
-
- !["Service Health payload condition"](media/action-groups-logic-app/service-health-payload-condition.png "Service Health payload condition")
-
- 1. In the **If true** condition, follow the instructions in steps 6 through 8 in [Create an activity log alert](#create-an-activity-log-alert-administrative) to add the Microsoft Teams action.
-
- 1. Define the message by using a text and dynamic content. Copy and paste the following content into the **Message** field. Replace the `[incidentType]`, `[trackingID]`, `[title]`, and `[communication]` fields with dynamic content tags of the same name. Use edit options available in Message to add strong/bold texts and links. The link *"For details, log in to the Azure Service Health dashboard."* in the below image has the destination set to https://portal.azure.com/#blade/Microsoft_Azure_Health/AzureHealthBrowseBlade/serviceIssues
-
- !["Service Health true condition post action"](media/action-groups-logic-app/service-health-true-condition-post-action-2.png "Service Health true condition post action")
-
- 1. For the **If false** condition, provide a useful message:
-
- !["Service Health false condition post action"](media/action-groups-logic-app/service-health-false-condition-post-action-2.png "Service Health false condition post action")
--- Step 11 is the same. Follow the instructions to save your logic app and update your action group.-
-## Create a metric alert
-
-The process for creating a metric alert is similar to [creating an activity log alert](#create-an-activity-log-alert-administrative), but with a few changes:
--- Steps 1 through 3 are the same.-- For step 4, use the following sample payload for the HTTP request trigger:-
- ```json
- {
- "schemaId": "AzureMonitorMetricAlert",
- "data": {
- "version": "2.0",
- "status": "Activated",
- "context": {
- "timestamp": "2018-04-09T19:00:07.7461615Z",
- "id": "...",
- "name": "TEST-VM CPU Utilization",
- "description": "",
- "conditionType": "SingleResourceMultipleMetricCriteria",
- "condition": {
- "windowSize": "PT15M",
- "allOf": [
- {
- "metricName": "Percentage CPU",
- "dimensions": [
- {
- "name": "ResourceId",
- "value": "d92fc5cb-06cf-4309-8c9a-538eea6a17a6"
- }
- ],
- "operator": "GreaterThan",
- "threshold": "5",
- "timeAggregation": "PT15M",
- "metricValue": 1.0
- }
- ]
- },
- "subscriptionId": "...",
- "resourceGroupName": "TEST",
- "resourceName": "test-vm",
- "resourceType": "Microsoft.Compute/virtualMachines",
- "resourceId": "...",
- "portalLink": "..."
- },
- "properties": {}
- }
- }
- ```
--- Steps 5 and 6 are the same.-- For steps 7 through 10, use the following process:-
- 1. Select **+** **New step** and then choose **Add a condition**. Set the following conditions so the logic app executes only when the input data matches these values below. When entering the version value into the text box, put quotes around it ("2.0") to makes sure that it's evaluated as a string and not a numeric type. The system does not show the quotes if you return to the page, but the underlying code still maintains the string type.
- - `schemaId == AzureMonitorMetricAlert`
- - `version == "2.0"`
-
- !["Metric alert payload condition"](media/action-groups-logic-app/metric-alert-payload-condition.png "Metric alert payload condition")
-
- 1. In the **If true** condition, add a **For each** loop and the Microsoft Teams action. Define the message by using a combination of HTML and dynamic content.
-
- !["Metric alert true condition post action"](media/action-groups-logic-app/metric-alert-true-condition-post-action-2.png "Metric alert true condition post action")
-
- 1. In the **If false** condition, define a Microsoft Teams action to communicate that the metric alert doesn't match the expectations of the logic app. Include the JSON payload. Notice how to reference the `triggerBody` dynamic content in the `json()` expression.
-
- !["Metric alert false condition post action"](media/action-groups-logic-app/metric-alert-false-condition-post-action-2.png "Metric alert false condition post action")
--- Step 11 is the same. Follow the instructions to save your logic app and update your action group.-
-## Calling other applications besides Microsoft Teams
-Logic Apps has a number of different connectors that allow you to trigger actions in a wide range of applications and databases. Slack, SQL Server, Oracle, Salesforce, are just some examples. For more information about connectors, see [Logic App connectors](../../connectors/apis-list.md).
-
-## Next steps
-* Get an [overview of Azure activity log alerts](./alerts-overview.md) and learn how to receive alerts.
-* Learn how to [configure alerts when an Azure Service Health notification is posted](../../service-health/alerts-activity-log-service-notifications-portal.md).
-* Learn more about [action groups](./action-groups.md).
azure-monitor Alerts Common Schema Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-integrations.md
- Title: How to integrate the common alert schema with Logic Apps
-description: Learn how to create a logic app that leverages the common alert schema to handle all your alerts.
- Previously updated : 05/27/2019
-ms.revewer: issahn
---
-# How to integrate the common alert schema with Logic Apps
-
-This article shows you how to create a logic app that leverages the common alert schema to handle all your alerts.
-
-## Overview
-
-The [common alert schema](./alerts-common-schema.md) provides a standardized and extensible JSON schema across all your different alert types. The common alert schema is most useful when leveraged programmatically ΓÇô through webhooks, runbooks, and logic apps. In this article, we demonstrate how a single logic app can be authored to handle all your alerts. The same principles can be applied to other programmatic methods. The logic app described in this article creates well-defined variables for the ['essential' fields](alerts-common-schema-definitions.md#essentials), and also describes how you can handle [alert type](alerts-common-schema-definitions.md#alert-context) specific logic.
--
-## Prerequisites
-
-This article assumes that the reader is familiar with
-* Setting up alert rules ([metric](../alerts/alerts-metric.md), [log](./alerts-log.md), [activity log](./alerts-activity-log.md))
-* Setting up [action groups](./action-groups.md)
-* Enabling the [common alert schema](./alerts-common-schema.md#how-do-i-enable-the-common-alert-schema) from within action groups
-
-## Create a logic app leveraging the common alert schema
-
-1. Follow the [steps outlined to create your logic app](./action-groups-logic-app.md).
-
-1. Select the trigger: **When a HTTP request is received**.
-
- ![Logic app triggers](media/action-groups-logic-app/logic-app-triggers.png "Logic app triggers")
-
-1. Select **Edit** to change the HTTP request trigger.
-
- ![HTTP request triggers](media/action-groups-logic-app/http-request-trigger-shape.png "HTTP request triggers")
--
-1. Copy and paste the following schema:
-
- ```json
- {
- "type": "object",
- "properties": {
- "schemaId": {
- "type": "string"
- },
- "data": {
- "type": "object",
- "properties": {
- "essentials": {
- "type": "object",
- "properties": {
- "alertId": {
- "type": "string"
- },
- "alertRule": {
- "type": "string"
- },
- "severity": {
- "type": "string"
- },
- "signalType": {
- "type": "string"
- },
- "monitorCondition": {
- "type": "string"
- },
- "monitoringService": {
- "type": "string"
- },
- "alertTargetIDs": {
- "type": "array",
- "items": {
- "type": "string"
- }
- },
- "originAlertId": {
- "type": "string"
- },
- "firedDateTime": {
- "type": "string"
- },
- "resolvedDateTime": {
- "type": "string"
- },
- "description": {
- "type": "string"
- },
- "essentialsVersion": {
- "type": "string"
- },
- "alertContextVersion": {
- "type": "string"
- }
- }
- },
- "alertContext": {
- "type": "object",
- "properties": {}
- }
- }
- }
- }
- }
- ```
-
-1. Select **+** **New step** and then choose **Add an action**.
-
- ![Add an action](media/action-groups-logic-app/add-action.png "Add an action")
-
-1. At this stage, you can add a variety of connectors (Microsoft Teams, Slack, Salesforce, etc.) based on your specific business requirements. You can use the 'essential fields' out-of-the-box.
-
- ![Essential fields](media/alerts-common-schema-integrations/logic-app-essential-fields.png "Essential fields")
-
- Alternatively, you can author conditional logic based on the alert type using the 'Expression' option.
-
- ![Logic app expression](media/alerts-common-schema-integrations/logic-app-expressions.png "Logic app expression")
-
- The ['monitoringService' field](alerts-common-schema-definitions.md#alert-context) allows you to uniquely identify the alert type, based on which you can create the conditional logic.
-
-
- For example, the below snippet checks if the alert is a Application Insights based log alert, and if so prints the search results. Else, it prints 'NA'.
-
- ```text
- if(equals(triggerBody()?['data']?['essentials']?['monitoringService'],'Application Insights'),triggerBody()?['data']?['alertContext']?['SearchResults'],'NA')
- ```
-
- Learn more about [writing logic app expressions](../../logic-apps/workflow-definition-language-functions-reference.md#logical-comparison-functions).
-
-
--
-## Next steps
-
-* [Learn more about action groups](./action-groups.md).
-* [Learn more about the common alert schema](./alerts-common-schema.md).
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
To create a [custom role](../../role-based-access-control/custom-roles.md) that
1. Select **JSON** and copy the `id` field.
- You'll need the `/providers/Microsoft.Authorization/roleDefinitions/<definition_id>` value when you call the https://management.azure.com/batch?api-version=2020-06-01 POST API.
+ You'll need the `/providers/Microsoft.Authorization/roleDefinitions/<definition_id>` value when you call the `https://management.azure.com/batch?api-version=2020-06-01` POST API.
1. Assign your custom role to the relevant users or groups: 1. Select **Access control (IAM)** > **Add** > **Add role assignment**.
To create a [custom role](../../role-based-access-control/custom-roles.md) that
1. Search for and select the relevant user or group and click **Select**. 1. Select **Review and assign**.
-1. Grant the users or groups read access to specific tables in a workspace by calling the https://management.azure.com/batch?api-version=2020-06-01 POST API and sending the following details in the request body:
+1. Grant the users or groups read access to specific tables in a workspace by calling the `https://management.azure.com/batch?api-version=2020-06-01` POST API and sending the following details in the request body:
```json {
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
Insights contain an aggregated view of the data: faces, topics, emotions. Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. For more information about available models, see [overview](video-indexer-overview.md).
-## Concepts
- Before you start using the insights, make sure to check [Limited Access features of Azure Video Indexer](limited-access-features.md). Then, check out Azure Video Indexer insights [transparency notes and use cases](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context):
Then, check out Azure Video Indexer insights [transparency notes and use cases](
* [Observed people tracking & matched faces](/legal/azure-video-indexer/observed-matched-people-transparency-note?context=/azure/azure-video-indexer/context/context) * [Topics inference](/legal/azure-video-indexer/topics-inference-transparency-note?context=/azure/azure-video-indexer/context/context)
-## Next steps
- Once you are [set up](video-indexer-get-started.md) with Azure Video Indexer, start using [insights](video-indexer-output-json-v2.md) and check out other **How to guides** that demonstrate how to navigate the website.
cosmos-db Tune Connection Configurations Java Sdk V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tune-connection-configurations-java-sdk-v4.md
As a first step, use the following recommended configuration settings below. The
| Configuration option | Default | Recommended | Details | | :: | :--: | :: | :--: | | maxConnectionPoolSize | "1000" | "1000" | This represents the upper bound size of the connection pool size for underlying http client, which is the maximum number of connections that SDK will create for requests going to Gateway mode. SDK reuses these connections when sending requests to the Gateway. |
-| idleConnectionTimeout | "PT60S" | "PT60S" | This represents the idle connection timeout duration for a *single connection* to the Gateway. After this time, the connection will be automatically closed and will be released back to connection pool for reusability. |
+| idleConnectionTimeout | "PT60S" | "PT60S" | This represents the idle connection timeout duration for a *single connection* to the Gateway. After this time, the connection will be automatically closed and will be removed from the connection pool. |
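As an illustration, a minimal Java SDK v4 sketch that applies these Gateway mode settings might look like the following (the endpoint, key, and chosen values are placeholders; verify the options against the SDK version you use):

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.GatewayConnectionConfig;

import java.time.Duration;

public class GatewayTuningExample {
    public static void main(String[] args) {
        // Tune the Gateway connection pool size and the idle connection timeout ("PT60S").
        GatewayConnectionConfig gatewayConfig = new GatewayConnectionConfig()
            .setMaxConnectionPoolSize(1000)
            .setIdleConnectionTimeout(Duration.ofSeconds(60));

        // Build a client in Gateway mode with the tuned configuration.
        CosmosClient client = new CosmosClientBuilder()
            .endpoint("<account-endpoint>") // placeholder
            .key("<account-key>")           // placeholder
            .gatewayMode(gatewayConfig)
            .buildClient();

        client.close();
    }
}
```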
## Next steps
data-share Share Your Data Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-arm.md
Title: 'Share outside your org (ARM template) - Azure Data Share quickstart' description: Learn how to share data with customers and partners using Azure Data Share and an Azure Resource Manager template (ARM template) in this quickstart.--++ Previously updated : 01/03/2022 Last updated : 10/27/2022
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
The Microsoft Security DevOps uses the following Open Source tools:
| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/master/LICENSE) | | [BinSkim](https://github.com/Microsoft/binskim) | Binary--Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) | | [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/eslint/eslint/blob/main/LICENSE) |
-| [Credscan](detect-credential-leaks.md) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files <br> common types: default passwords, SQL connection strings, Certificates with private keys | Not Open Source |
+| [Credscan](detect-exposed-secrets.md) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files <br> common types: default passwords, SQL connection strings, Certificates with private keys | Not Open Source |
| [Template Analyzer](https://github.com/Azure/template-analyzer) | ARM template, Bicep file | [MIT License](https://github.com/Azure/template-analyzer/blob/main/LICENSE.txt) |
| [Terrascan](https://github.com/accurics/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/accurics/terrascan/blob/master/LICENSE) |
| [Trivy](https://github.com/aquasecurity/trivy) | container images, file systems, git repositories | [Apache License 2.0](https://github.com/aquasecurity/trivy/blob/main/LICENSE) |
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 10/23/2022 Last updated : 10/30/2022 # Cloud Security Posture Management (CSPM)
Defender for Cloud continually assesses your resources, subscriptions, and organ
|Aspect|Details|
|-|:-|
|Release state:| Foundational CSPM capabilities: GA <br> Defender Cloud Security Posture Management (CSPM): Preview |
-|Clouds:| **Foundational CSPM capabilities** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP projects <br> <br> **Defender Cloud Security Posture Management (CSPM)** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP projects |
+|Clouds:| **Foundational CSPM capabilities** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. <br> <br> **Defender Cloud Security Posture Management (CSPM)** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. |
## Defender CSPM plan options
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
Defender for DevOps allows you to manage your connected environments and provide
:::image type="content" source="media/defender-for-devops-introduction/devops-dashboard.png" alt-text="Screenshot of the Defender for DevOps dashboard." lightbox="media/defender-for-devops-introduction/devops-dashboard.png":::
-Here, you can [add GitHub](quickstart-onboard-github.md) and [Azure DevOps](quickstart-onboard-devops.md) environments, customize DevOps workbooks to show your desired metrics, view our guides and give feedback, and [configure your pull request annotations](tutorial-enable-pull-request-annotations.md).
+Here, you can [add GitHub](quickstart-onboard-github.md) and [Azure DevOps](quickstart-onboard-devops.md) environments, customize DevOps workbooks to show your desired metrics, view our guides and give feedback, and [configure your pull request annotations](enable-pull-request-annotations.md).
### Understanding your DevOps security
On this part of the screen you see:
## Next steps
-[Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md).
+[Configure the Microsoft Security DevOps GitHub action](github-action.md).
-[Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md).
+[Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
defender-for-cloud Detect Exposed Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/detect-exposed-secrets.md
+
+ Title: Detect exposed secrets in code
+
+description: Use Defender for Cloud's secret scanning for Defender for DevOps to prevent passwords and other secrets stored in your code from being exposed to outside individuals.
++ Last updated : 09/11/2022++
+# Detect exposed secrets in code
+
+When passwords and other secrets are stored in source code, it poses a significant risk and could compromise the security of your environments. Defender for Cloud offers a solution by using secret scanning to detect credentials, secrets, certificates, and other sensitive content in your source code and your build output. Secret scanning can be run as part of the Microsoft Security DevOps for Azure DevOps extension. To explore the options available for secret scanning in GitHub, learn more [about secret scanning](https://docs.github.com/en/enterprise-cloud@latest/code-security/secret-scanning/about-secret-scanning) in GitHub.
+
+> [!NOTE]
+> During the Defender for DevOps preview period, GitHub Advanced Security for Azure DevOps (GHAS for AzDO) is also providing a free trial of secret scanning.
+
+Check the list of [supported file types and exit codes](#supported-file-types-and-exit-codes).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have a subscription, you can sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
+
+- [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md)
+
+## Set up secret scanning in Azure DevOps
+
+You can run secret scanning as part of the Azure DevOps build process by using the Microsoft Security DevOps (MSDO) Azure DevOps extension.
+
+**To add secret scanning to the Azure DevOps build process**:
+
+1. Sign in to [Azure DevOps](https://dev.azure.com/)
+
+1. Navigate to **Pipelines**.
+
+1. Locate the pipeline where the MSDO Azure DevOps extension is configured.
+
+1. Select **Edit**.
+
+1. Add the following lines to the YAML file:
+
+ ```yml
+ inputs:
+ categories: 'secrets'
+ ```
+
+1. Select **Save**.
+
+These additions to your YAML file ensure that secret scanning runs only when you execute a build in your Azure DevOps pipeline.
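+
+For orientation, a minimal pipeline sketch is shown below. The task name `MicrosoftSecurityDevOps@1` and the placement of the `inputs` block are assumptions based on how the MSDO extension is commonly referenced; keep whatever your existing pipeline already defines and only add the `inputs` section.
+
+```yml
+# Minimal sketch of an azure-pipelines.yml that limits the MSDO scan to secrets.
+# The task name below is assumed from the MSDO extension; verify it against your pipeline.
+trigger:
+- main
+
+pool:
+  vmImage: 'windows-latest'
+
+steps:
+- task: MicrosoftSecurityDevOps@1
+  displayName: 'Microsoft Security DevOps'
+  inputs:
+    categories: 'secrets'   # run only the secret scanning category
+```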
+
+## Suppress false positives
+
+When the scanner runs, it may detect credentials that are false positives. You can use inline-suppression to suppress these false positives.
+
+Typical candidates for suppression are fake secrets in unit tests or mock paths, and inaccurate results. We don't recommend using suppression for test credentials, which can still pose a security risk and should be stored securely.
+
+> [!NOTE]
+> Valid inline suppression syntax depends on the language, data format and CredScan version you are using.
+
+### Suppress a same line secret
+
+To suppress a secret that is found on the same line, add the following code as a comment at the end of the line that has the secret:
+
+```bash
+[SuppressMessage("Microsoft.Security", "CS001:SecretInLine", Justification="... .")]
+```
+
+### Suppress a secret in the next line
+
+To suppress the secret found in the next line, add the following code as a comment before the line that has the secret:
+
+```bash
+[SuppressMessage("Microsoft.Security", "CS002:SecretInNextLine", Justification="... .")]
+```
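+
+As a hypothetical illustration, assuming a Python source file and the next-line suppression syntax shown above, a fake secret used only by unit tests might be suppressed like this (the variable name and value are invented for the example):
+
+```python
+# [SuppressMessage("Microsoft.Security", "CS002:SecretInNextLine", Justification="fake secret used only in unit tests")]
+FAKE_CONNECTION_STRING = "Server=localhost;User Id=test;Password=not-a-real-password;"
+```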
+
+## Supported file types and exit codes
+
+CredScan supports the following file types:
+
+| Supported file types | | | | | |
+|--|--|--|--|--|--|
+| 0.001 |\*.conf | id_rsa |\*.p12 |\*.sarif |\*.wadcfgx |
+| 0.1 |\*.config |\*.iis |\*.p12* |\*.sc |\*.waz |
+| 0.8 |\*.cpp |\*.ijs |\*.params |\*.scala |\*.webtest |
+| *_sk |\*.crt |\*.inc | password |\*.scn |\*.wsx |
+| *password |\*.cs |\*.inf |\*.pem | scopebindings.json |\*.wtl |
+| *pwd*.txt |\*.cscfg |\*.ini |\*.pfx* |\*.scr |\*.xaml |
+|\*.*_/key |\*.cshtm |\*.ino | pgpass |\*.script |\*.xdt |
+|\*.*__/key |\*.cshtml |\*.insecure |\*.php |\*.sdf |\*.xml |
+|\*.1/key |\*.csl |\*.install |\*.pkcs12* |\*.secret |\*.xslt |
+|\*.32bit |\*.csv |\*.ipynb |\*.pl |\*.settings |\*.yaml |
+|\*.3des |\*.cxx |\*.isml |\*.plist |\*.sh |\*.yml |
+|\*.added_cluster |\*.dart |\*.j2 |\*.pm |\*.shf |\*.zaliases |
+|\*.aes128 |\*.dat |\*.ja |\*.pod |\*.side |\*.zhistory |
+|\*.aes192 |\*.data |\*.jade |\*.positive |\*.side2 |\*.zprofile |
+|\*.aes256 |\*.dbg |\*.java |\*.ppk* |\*.snap |\*.zsh_aliases |
+|\*.al |\*.defaults |\*.jks* |\*.priv |\*.snippet |\*.zsh_history |
+|\*.argfile |\*.definitions |\*.js | privatekey |\*.sql |\*.zsh_profile |
+|\*.as |\*.deployment |\*.json | privatkey |\*.ss |\*.zshrc |
+|\*.asax | dockerfile |\*.jsonnet |\*.prop | ssh\\config | |
+|\*.asc | _dsa |\*.jsx |\*.properties | ssh_config | |
+|\*.ascx |\*.dsql | kefile |\*.ps |\*.ste | |
+|\*.asl |\*.dtsx | key |\*.ps1 |\*.svc | |
+|\*.asmmeta | _ecdsa | keyfile |\*.psclass1 |\*.svd | |
+|\*.asmx | _ed25519 |\*.key |\*.psm1 |\*.svg | |
+|\*.aspx |\*.ejs |\*.key* | psql_history |\*.svn-base | |
+|\*.aurora |\*.env |\*.key.* |\*.pub |\*.swift | |
+|\*.azure |\*.erb |\*.keys |\*.publishsettings |\*.tcl | |
+|\*.backup |\*.ext |\*.keystore* |\*.pubxml |\*.template | |
+|\*.bak |\*.ExtendedTests |\*.linq |\*.pubxml.user | template | |
+|\*.bas |\*.FF |\*.loadtest |\*.pvk* |\*.test | |
+|\*.bash_aliases |\*.frm |\*.local |\*.py |\*.textile | |
+|\*.bash_history |\*.gcfg |\*.log |\*.pyo |\*.tf | |
+|\*.bash_profile |\*.git |\*.m |\*.r |\*.tfvars | |
+|\*.bashrc |\*.git/config |\*.managers |\*.rake | tmdb | |
+|\*.bat |\*.gitcredentials |\*.map |\*.razor |\*.trd | |
+|\*.Beta |\*.go |\*.md |\*.rb |\*.trx | |
+|\*.BF |\*.gradle |\*.md-e |\*.rc |\*.ts | |
+|\*.bicep |\*.groovy |\*.mef |\*.rdg |\*.tsv | |
+|\*.bim |\*.grooy |\*.mst |\*.rds |\*.tsx | |
+|\*.bks* |\*.gsh |\*.my |\*.reg |\*.tt | |
+|\*.build |\*.gvy |\*.mysql_aliases |\*.resx |\*.txt | |
+|\*.c |\*.gy |\*.mysql_history |\*.retail |\*.user | |
+|\*.cc |\*.h |\*.mysql_profile |\*.robot | user | |
+|\*.ccf | host | npmrc |\*.rqy | userconfig* | |
+|\*.cfg |\*.hpp |\*.nuspec | _rsa |\*.usersaptinstall | |
+|\*.clean |\*.htm |\*.ois_export |\*.rst |\*.usersaptinstall | |
+|\*.cls |\*.html |\*.omi |\*.ruby |\*.vb | |
+|\*.cmd |\*.htpassword |\*.opn |\*.runsettings |\*.vbs | |
+|\*.code-workspace | hubot |\*.orig |\*.sample |\*.vizfx | |
+|\*.coffee |\*.idl |\*.out |\*.SAMPLE |\*.vue | |
+
+The following exit codes are available in CredScan:
+
+| Code | Description |
+|--|--|
+| 0 | Scan completed successfully with no application warning, no suppressed match, no credential match. |
+| 1 | Partial scan completed with nothing but application warning. |
+| 2 | Scan completed successfully with nothing but suppressed match(es). |
+| 3 | Partial scan completed with both application warning(s) and suppressed match(es). |
+| 4 | Scan completed successfully with nothing but credential match(es). |
+| 5 | Partial scan completed with both application warning(s) and credential match(es). |
+| 6 | Scan completed successfully with both suppressed match(es) and credential match(es). |
+| 7 | Partial scan completed with application warning(s), suppressed match(es) and credential match(es). |
+| -1000 | Scan failed with command line argument error. |
+| -1100 | Scan failed with app settings error. |
+| -1500 | Scan failed with other configuration error. |
+| -1600 | Scan failed with IO error. |
+| -9000 | Scan failed with unknown error. |
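+
+If you wrap the scanner in your own script step, you can branch on these exit codes to decide whether the build should fail. The following Bash sketch is illustrative only: `run-credscan` is a hypothetical placeholder for however your pipeline invokes the scanner, and negative error codes may be reported differently by your shell.
+
+```bash
+# Sketch only: "run-credscan" is a hypothetical placeholder for your scanner invocation.
+run-credscan
+code=$?
+
+case $code in
+  0|2) echo "No credential matches (clean or suppressed only)." ;;
+  4|6) echo "Credential match(es) found - failing the build."; exit 1 ;;
+  1|3|5|7) echo "Partial scan with warnings - review the logs."; exit 1 ;;
+  *) echo "Scan error (exit code $code)."; exit 1 ;;
+esac
+```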
+
+## Next steps
++ Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud to remediate secrets in code before they are shipped to production.
defender-for-cloud Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-pull-request-annotations.md
+
+ Title: Enable pull request annotations in GitHub or in Azure DevOps
+description: Add pull request annotations in GitHub or in Azure DevOps so that your SecOps and developer teams can stay on the same page when it comes to mitigating issues.
++ Last updated : 10/30/2022++
+# Enable pull request annotations in GitHub and Azure DevOps
+
+Defender for DevOps exposes security findings as annotations in Pull Requests (PR). Security operators can enable PR annotations in Microsoft Defender for Cloud. Any exposed issues can then be remedied by developers. This process can prevent and fix potential security vulnerabilities and misconfigurations before they enter the production stage. Defender for DevOps annotates the vulnerabilities within the differences in the file rather than all the vulnerabilities detected across the entire file. Developers are able to see annotations in their source code management systems and Security operators can see any unresolved findings in Microsoft Defender for Cloud.
+
+With Microsoft Defender for Cloud, you can configure PR annotations in Azure DevOps. You can get PR annotations in GitHub if you're a GitHub Advanced Security customer.
+
+> [!NOTE]
+> GitHub Advanced Security for Azure DevOps (GHAzDO) is providing a free trial of PR annotations during the Defender for DevOps preview.
+
+## Prerequisites
+
+**For GitHub**:
+
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- Be a [GitHub Advanced Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) customer.
+- [Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md).
+- [Configure the Microsoft Security DevOps GitHub action](github-action.md).
+
+**For Azure DevOps**:
+
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md).
+- [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- [Set up secret scanning in Azure DevOps](detect-exposed-secrets.md#set-up-secret-scanning-in-azure-devops).
+
+## Enable pull request annotations in GitHub
+
+By enabling pull request annotations in GitHub, your developers gain the ability to see their security issues when they create a PR directly to the main branch.
+
+**To enable pull request annotations in GitHub**:
+
+1. Navigate to [GitHub](https://github.com/) and sign in.
+
+1. Select a repository that you've onboarded to Defender for Cloud.
+
+1. Navigate to **`Your repository's home page`** > **.github/workflows**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/workflow-folder.png" alt-text="Screenshot that shows where to navigate to, to select the GitHub workflow folder." lightbox="media/tutorial-enable-pr-annotations/workflow-folder.png":::
+
+1. Select **msdevopssec.yml**, which was created in the [prerequisites](#prerequisites).
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/devopssec.png" alt-text="Screenshot that shows you where on the screen to select the msdevopssec.yml file." lightbox="media/tutorial-enable-pr-annotations/devopssec.png":::
+
+1. Select **edit**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/edit-button.png" alt-text="Screenshot that shows you what the edit button looks like." lightbox="media/tutorial-enable-pr-annotations/edit-button.png":::
+
+1. Locate and update the trigger section to include:
+
+ ```yml
+ # Triggers the workflow on push or pull request events but only for the main branch
+ pull_request:
+ branches: ["main"]
+ ```
+
+ You can also view a [sample repository](https://github.com/microsoft/security-devops-action/tree/main/samples).
+
+    (Optional) You can select which branches to run it on by entering the branch(es) under the trigger section. If you want to include all branches, remove the lines with the branch list.
+
+1. Select **Start commit**.
+
+1. Select **Commit changes**.
+
+Any issues that are discovered by the scanner will be viewable in the Files changed section of your pull request.
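+
+For orientation, a trimmed-down sketch of what `msdevopssec.yml` might look like after this change is shown below. The action reference (`microsoft/security-devops-action`) and the SARIF upload step are assumptions based on the public sample repository linked above; keep whatever your existing workflow already defines.
+
+```yml
+name: MSDO scan
+
+on:
+  push:
+    branches: ["main"]
+  # Added so findings surface as annotations on pull requests to main
+  pull_request:
+    branches: ["main"]
+
+jobs:
+  msdo:
+    runs-on: windows-latest
+    steps:
+      - uses: actions/checkout@v3
+
+      # Run Microsoft Security DevOps (action reference assumed from the sample repository)
+      - uses: microsoft/security-devops-action@latest
+        id: msdo
+
+      # Publish results so they appear as pull request annotations
+      - uses: github/codeql-action/upload-sarif@v2
+        with:
+          sarif_file: ${{ steps.msdo.outputs.sarifFile }}
+```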
+
+### Resolve security issues in GitHub
+
+**To resolve security issues in GitHub**:
+
+1. Navigate through the page and locate an affected file with an annotation.
+
+1. Follow the remediation steps in the annotation. If you choose not to remediate the annotation, select **Dismiss alert**.
+
+1. Select a reason to dismiss:
+
+ - **Won't fix** - The alert is noted but won't be fixed.
+ - **False positive** - The alert isn't valid.
+ - **Used in tests** - The alert isn't in the production code.
+
+## Enable pull request annotations in Azure DevOps
+
+By enabling pull request annotations in Azure DevOps, your developers gain the ability to see their security issues when they create PRs directly to the main branch.
+
+### Enable Build Validation policy for the CI Build
+
+Before you can enable pull request annotations, your main branch must have a Build Validation policy enabled for the CI Build.
+
+**To enable Build Validation policy for the CI Build**:
+
+1. Sign in to your Azure DevOps project.
+
+1. Navigate to **Project settings** > **Repositories**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/project-settings.png" alt-text="Screenshot that shows you where to navigate to, to select repositories.":::
+
+1. Select the repository to enable pull requests on.
+
+1. Select **Policies**.
+
+1. Navigate to **Branch Policies** > **Main branch**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/branch-policies.png" alt-text="Screenshot that shows where to locate the branch policies." lightbox="media/tutorial-enable-pr-annotations/branch-policies.png":::
+
+1. Locate the Build Validation section.
+
+1. Ensure the CI Build is toggled to **On**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/build-validation.png" alt-text="Screenshot that shows where the CI Build toggle is located.":::
+
+1. Select **Save**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/validation-policy.png" alt-text="Screenshot that shows the build validation.":::
+
+Once you've completed these steps, you can select the build pipeline you created previously and customize its settings to suit your needs.
+
+### Enable pull request annotations
+
+**To enable pull request annotations in Azure DevOps**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Defender for Cloud** > **DevOps Security**.
+
+1. Select all relevant repositories to enable pull request annotations on.
+
+1. Select **Configure**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/select-configure.png" alt-text="Screenshot that shows you where to select configure on the screen.":::
+
+1. Toggle Pull request annotations to **On**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/annotation-on.png" alt-text="Screenshot that shows the toggle switched to on.":::
+
+1. (Optional) Select a category from the drop-down menu.
+
+ > [!NOTE]
+ > Only secret scan results are currently supported.
+
+1. (Optional) Select a severity level from the drop-down menu.
+
+ > [!NOTE]
+ > Only high-level severity findings are currently supported.
+
+1. Select **Save**.
+
+From now on, annotations will be displayed on the relevant lines of code in your main branch, based on your configuration.
+
+### Resolve security issues in Azure DevOps
+
+Once you've configured the scanner, you'll be able to view all issues that were detected.
+
+**To resolve security issues in Azure DevOps**:
+
+1. Sign in to [Azure DevOps](https://azure.microsoft.com/products/devops).
+
+1. Navigate to **Pull requests**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/pull-requests.png" alt-text="Screenshot showing where to go to navigate to pull requests.":::
+
+1. On the Overview, or files page, locate an affected line with an annotation.
+
+1. Follow the remediation steps in the annotation.
+
+1. Select **Active** to change the status of the annotation and access the dropdown menu.
+
+1. Select an action to take:
+
+ - **Active** - The default status for new annotations.
+ - **Pending** - The finding is being worked on.
+ - **Resolved** - The finding has been addressed.
+ - **Won't fix** - The finding is noted but won't be fixed.
+ - **Closed** - The discussion in this annotation is closed.
+
+Defender for DevOps will re-activate an annotation if the security issue is not fixed in a new iteration.
+
+## Learn more
+
+Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
+
+Learn how to [Discover misconfigurations in Infrastructure as Code](iac-vulnerabilities.md).
+
+Learn how to [detect exposed secrets in code](detect-exposed-secrets.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> Now learn more about [Defender for DevOps](defender-for-devops-introduction.md).
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
Learn how to [configure the MSDO Azure DevOps extension](azure-devops-extension.md).
-Learn how to [configure pull request annotations](tutorial-enable-pull-request-annotations.md) in Defender for Cloud.
+Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud.
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
Learn how to [configure the MSDO GitHub action](github-action.md).
-Learn how to [configure pull request annotations](tutorial-enable-pull-request-annotations.md) in Defender for Cloud.
+Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud.
defender-for-iot How To Manage Sensors On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md
This procedure describes how to use the Azure portal to contact vendors for pre-
1. Do one of the following:
- - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
+ - To buy a pre-configured appliance, select **Contact** under **Buy preconfigured appliance**. This opens an email to [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances) with a template request for Defender for IoT appliances. For more information, see [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md).
- To install software on your own appliances, do the following:
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
This article provides a catalog of the pre-configured appliances available for M
Use the links in the tables below to jump to articles with more details about each appliance.
-Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com).
+Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances).
For more information, see [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors).
Pre-configured physical appliances have been validated for Defender for IoT OT s
## Appliances for OT network sensors
-You can [order](mailto:hardware.sales@arrow.com) any of the following preconfigured appliances for monitoring your OT networks:
+You can [order](mailto:hardware.sales@arrow.com?cc=DIoTHardwarePurchase@microsoft.com&subject=Information%20about%20MD4IoT%20pre-configured%20appliances) any of the following preconfigured appliances for monitoring your OT networks:
|Hardware profile |Appliance |Performance / Monitoring |Physical specifications | |||||
dev-box How To Manage Dev Box Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md
+
+ Title: How to manage a dev box definition
+
+description: This article describes how to create, update, and delete Microsoft Dev Box dev box definitions.
++++ Last updated : 10/10/2022+++
+<!-- Intent: As a dev infrastructure manager, I want to be able to manage dev box definitions so that I can provide appropriate dev boxes to my users. -->
+
+# Manage a dev box definition
+
+A dev box definition is a Microsoft Dev Box Preview resource that specifies a source image, compute size, and storage size. You can use a source image from the Azure Marketplace, or a custom image from an Azure Compute Gallery.
+
+Depending on their task, development teams have different software, configuration, compute, and storage size requirements. You can create a new dev box definition to fulfill each team's needs. There's no limit to the number of dev box definitions you can create, and you can use dev box definitions across multiple projects in a dev center.
+
+## Permissions
+To manage a dev box definition, you need the following permissions:
+
+|Action|Permission required|
+|--|--|
+|Create, delete, or update dev box definition|Owner, Contributor, or Write permissions on the dev center in which you want to create the dev box definition. |
+
+## Sources of images
+
+When you create a dev box definition, you can choose a preconfigured image from the Azure Marketplace, or a custom image from an attached Azure Compute Gallery.
+
+### Azure Marketplace
+The Azure Marketplace gives you quick, easy access to various images, including images that are preconfigured with productivity tools like Microsoft Teams and provide optimal performance.
+
+When selecting a Marketplace image, consider using an image that has the latest version of Windows 11 Enterprise and the Microsoft 365 Apps installed.
+
+### Azure Compute Gallery
+An Azure Compute Gallery enables you to store and manage a collection of custom images. You can build an image to your dev team's exact requirements, and store it in a gallery. To use the custom image while creating a dev box definition, attach the gallery to your dev center. Learn how to attach a gallery here: [Configure an Azure Compute Gallery](how-to-configure-azure-compute-gallery.md).
+
+## Image versions
+When you select an image to use in your dev box definition, you must specify if updated versions of the image will be used.
+- **Numbered image versions:** If you want a consistent dev box definition in which the base image doesn't change, use a specific, numbered version of the image. Using a numbered version ensures all the dev boxes in the pool always use the same version of the image.
+- **Latest image versions:** If you want a flexible dev box definition in which you can update the base image as needs change, use the latest version of the image. Using the latest version of the image ensures that new dev boxes use the most recent version of the image. Existing dev boxes will not be modified when an image version is updated.
+
+## Create a dev box definition
+
+You can create multiple dev box definitions to meet the needs of your developer teams.
+
+The following steps show you how to create a dev box definition using an existing dev center.
+
+If you don't have an available dev center, follow the steps in [Quickstart: Configure the Microsoft Dev Box service](./quickstart-configure-dev-box-service.md) to create one.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *dev center* and then select **Dev centers** from the list.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/discover-devcenter.png" alt-text="Screenshot showing a search for devcenter from the Azure portal search box.":::
+
+1. Open the dev center in which you want to create the new dev box definition, and then select **Dev box definitions**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png" alt-text="Screenshot showing the dev center overview page with Dev box definitions highlighted.":::
+
+1. On the Dev box definitions page, select **+ Create**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/create-dev-box-definition.png" alt-text="Screenshot of the list of existing dev box definitions with Create highlighted.":::
+
+1. On the Create dev box definition page, enter the following values:
+
+ |Name|Value|
+ |-|-|
+ |**Name**|Enter a descriptive name for your dev box definition. Note that you can't change the dev box definition name after it's created. |
+ |**Image**|Select the base operating system for the dev box. You can select an image from the Marketplace or from an Azure Compute Gallery.|
+ |**Image version**|Select a specific, numbered version to ensure all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure new dev boxes use the latest image available.|
+ |**Compute**|Select the compute combination for your dev box definition.|
+ |**Storage**|Select the amount of storage for your dev box definition.|
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/create-dev-box-definition-page.png" alt-text="Screenshot showing the Create dev box definition page.":::
+
+1. To create the dev box definition, select **Create**.
+
+## Update a dev box definition
+
+Over time, your needs for dev boxes will change. You may want to move from a Windows 10 base operating system to a Windows 11 base operating system, or increase the default compute specification for your dev boxes. Your initial dev box definitions may no longer be appropriate for your needs. You can update a dev box definition, so that new dev boxes will use the new configuration.
+
+You can update the image, image version, compute, and storage settings for a dev box definition.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *dev center* and then select **Dev centers** from the list.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/discover-devcenter.png" alt-text="Screenshot showing a search for devcenter from the Azure portal search box.":::
+
+1. Open the dev center that contains the dev box definition you want to update, and then select **Dev box definitions**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png" alt-text="Screenshot showing the dev center overview page with Dev box definitions highlighted.":::
+
+1. Select the dev box definition(s) you want to update and then select the edit button.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/update-dev-box-definition.png" alt-text="Screenshot of the list of existing dev box definitions, with the edit button highlighted.":::
+
+1. On the Edit *dev box definition name* page, you can select a new image, change the image version, change the compute, or modify the storage available.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/update-dev-box-definition-page.png" alt-text="Screenshot of the edit dev box definition page.":::
+
+1. When you have made your updates, select **Save**.
+## Delete a dev box definition
+
+You can delete a dev box definition when you no longer want to use it. Deleting a dev box definition is permanent, and can't be undone. Dev box definitions can't be deleted if they are in use by one or more dev box pools.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, type *dev center* and then select **Dev centers** from the list.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/discover-devcenter.png" alt-text="Screenshot showing a search for devcenter from the Azure portal search box.":::
+
+1. Open the dev center from which you want to delete the dev box definition, and then select **Dev box definitions**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/select-dev-box-definitions.png" alt-text="Screenshot showing the dev center overview page with Dev box definitions highlighted.":::
+
+1. Select the dev box definition you want to delete and then select **Delete**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-dev-box-definition.png" alt-text="Screenshot of the list of existing dev box definitions, with the one to be deleted selected.":::
+
+1. In the warning message, select **OK**.
+
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/delete-warning.png" alt-text="Screenshot of the Delete dev box definition warning message.":::
++
+## Next steps
+
+- [Provide access to projects for project admins](./how-to-project-admin.md)
+- [Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
The following steps show you how to create and configure a dev box definition. Y
|Name|Value|Note|
|-|-|-|
|**Name**|Enter a descriptive name for your dev box definition.|
- |**Image**|Select the base operating system for the dev box. You can select an image from the marketplace or from an Azure Compute Gallery.|To use custom images while creating a dev box definition, you can attach an Azure Compute Gallery that has the custom images. Learn [How to configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).|
+ |**Image**|Select the base operating system for the dev box. You can select an image from the Azure Marketplace or from an Azure Compute Gallery. <br> If you're creating a dev box definition for testing purposes, consider using the **Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To use custom images while creating a dev box definition, you can attach an Azure Compute Gallery that has the custom images. Learn [How to configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).|
|**Image version**|Select a specific, numbered version to ensure all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure new dev boxes use the latest image available.|Selecting the Latest image version enables the dev box pool to use the most recent image version for your chosen image from the gallery. This way, the dev boxes created will stay up to date with the latest tools and code on your image. Existing dev boxes will not be modified when an image version is updated.|
+ |**Compute**|Select the compute combination for your dev box definition.||
+ |**Storage**|Select the amount of storage for your dev box definition.||
- :::image type="content" source="./media/quickstart-configure-dev-box-service/dev-box-definition-create.png" alt-text="Screenshot showing the create dev box definition page with suggested images highlighted.":::
-
- While selecting the gallery image, consider using either of the two images:
- - Windows 11 Enterprise + Microsoft 365 Apps 21H2
- - Windows 10 Enterprise + Microsoft 365 Apps 21H2
-
- These images are preconfigured with productivity tools like Microsoft Teams and configured for optimal performance.
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-box-definition-page.png" alt-text="Screenshot showing the Create dev box definition page.":::
1. Select **Create**.
devtest-labs Create Lab Windows Vm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-bicep.md
Title: Create a lab in Azure DevTest Labs using Bicep description: Use Bicep to create a lab that has a virtual machine in Azure DevTest Labs.-++ - Last updated 03/22/2022
event-grid Communication Services Voice Video Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md
This section contains an example of what that data would look like for each even
"index": 0, "endReason": "SessionEnded", "contentLocation": "https://storage.asm.skype.com/v1/objects/0-eus-d12-801b3f3fc462fe8a01e6810cbff729b8/content/video",
- "metadataLocation": "https://storage.asm.skype.com/v1/objects/0-eus-d12-801b3f3fc462fe8a01e6810cbff729b8/content/acsmetadata"
+ "metadataLocation": "https://storage.asm.skype.com/v1/objects/0-eus-d12-801b3f3fc462fe8a01e6810cbff729b8/content/acsmetadata",
"deleteLocation": "https://us-storage.asm.skype.com/v1/objects/0-eus-d1-83e9599991e21ad21220427d78fbf558" } ]
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart.md
Title: 'Quickstart: Send custom events with Event Grid and Azure CLI' description: 'Quickstart Use Azure Event Grid and Azure CLI to publish a custom topic, and subscribe to events for that topic. The events are handled by a web application.' Previously updated : 07/01/2021 Last updated : 10/28/2022
Typically, you send events to an endpoint that processes the event data and take
When you're finished, you see that the event data has been sent to the web app.
-![View results in the Azure Event Grid Viewer](./media/custom-event-quickstart/azure-event-grid-viewer-record-inserted-event.png)
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
When you're finished, you see that the event data has been sent to the web app.
Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command.
-
-The following example creates a resource group named *gridResourceGroup* in the *westus2* location.
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named *gridResourceGroup* in the *westus2* location. If you select **Try it**, the Azure Cloud Shell window opens in the right pane. Select **Copy** to copy the command, paste it into the Cloud Shell window, and press ENTER to run it. Change the name of the resource group and the location if you like.
```azurecli-interactive az group create --name gridResourceGroup --location westus2
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An event grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group using Bash in Azure Cloud Shell. Replace `<your-topic-name>` with a unique name for your topic. The custom topic name must be unique because it's part of the DNS entry. Additionally, it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-"
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group using Bash in Azure Cloud Shell. Replace `<your-topic-name>` with a unique name for your topic. The custom topic name must be unique because it's part of the DNS entry. Additionally, it must be between 3 and 50 characters long and contain only the values a-z, A-Z, 0-9, and "-".
-```azurecli-interactive
-topicname=<your-topic-name>
+1. Copy the following command, specify a name for the topic, and press ENTER to run the command.
-az eventgrid topic create --name $topicname -l westus2 -g gridResourceGroup
-```
+ ```azurecli-interactive
+ topicname=<your-topic-name>
+ ```
+2. Use the [`az eventgrid topic create`](/cli/azure/eventgrid/topic#az-eventgrid-topic-create) command to create a custom topic.
+
+ ```azurecli-interactive
+ az eventgrid topic create --name $topicname -l westus2 -g gridResourceGroup
+ ```
## Create a message endpoint Before subscribing to the custom topic, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
-Replace `<your-site-name>` with a unique name for your web app. The web app name must be unique because it's part of the DNS entry.
-```azurecli-interactive
-sitename=<your-site-name>
-az deployment group create \
- --resource-group gridResourceGroup \
- --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
- --parameters siteName=$sitename hostingPlanName=viewerhost
-```
+1. Copy the following command, specify a name for the web app (Event Grid Viewer sample), and press ENTER to run the command. Replace `<your-site-name>` with a unique name for your web app. The web app name must be unique because it's part of the DNS entry.
+
+ ```azurecli-interactive
+ sitename=<your-site-name>
+ ```
+2. Run the [`az deployment group create`](/cli/azure/deployment/group#az-deployment-group-create) command to deploy the web app using an Azure Resource Manager template.
+
+ ```azurecli-interactive
+ az deployment group create \
+ --resource-group gridResourceGroup \
+ --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
+ --parameters siteName=$sitename hostingPlanName=viewerhost
+ ```
The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to: `https://<your-site-name>.azurewebsites.net`
You should see the site with no messages currently displayed.
## Subscribe to a custom topic
-You subscribe to an event grid topic to tell Event Grid which events you want to track and where to send those events. The following example subscribes to the custom topic you created, and passes the URL from your web app as the endpoint for event notification.
+You subscribe to an Event Grid topic to tell Event Grid which events you want to track and where to send those events. The following example subscribes to the custom topic you created, and passes the URL from your web app as the endpoint for event notification.
The endpoint for your web app must include the suffix `/api/updates/`.
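For example, you can create the subscription with the `az eventgrid event-subscription create` command. The following is a sketch in which the subscription name is illustrative; `$topicname` and `$sitename` are the variables defined earlier.

```azurecli-interactive
endpoint="https://$sitename.azurewebsites.net/api/updates"
topicid=$(az eventgrid topic show --name $topicname -g gridResourceGroup --query id --output tsv)

az eventgrid event-subscription create \
  --source-resource-id $topicid \
  --name demoViewerSub \
  --endpoint $endpoint
```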
healthcare-apis Iot Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-overview.md
Previously updated : 10/17/2022 Last updated : 10/28/2022
MedTech service is important because healthcare data can be difficult to access
The following diagram outlines the basic elements of how MedTech service transforms medical device data into a standardized FHIR resource in the cloud. These elements are:
MedTech service delivers your data to FHIR service in Azure Health Data Services
### Configurable
-Your MedTech service can be customized and configured by using [Device](how-to-use-device-mappings.md) and [FHIR destination](how-to-use-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR observation resources.
+Your MedTech service can be customized and configured by using [device](how-to-use-device-mappings.md) and [FHIR destination](how-to-use-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR observation resources.
Useful options could include:
The following Microsoft solutions can use MedTech service for extra functionalit
In this article, you learned about the MedTech service. To learn more about the MedTech service data flow and how to deploy the MedTech service in the Azure portal, see
->[!div class="nextstepaction"]
->[The MedTech service data flows](iot-data-flow.md)
+> [!div class="nextstepaction"]
+> [MedTech service data flows](iot-data-flow.md)
->[!div class="nextstepaction"]
->[Deploy the MedTech service using the Azure portal](deploy-iot-connector-in-azure.md)
+> [!div class="nextstepaction"]
+> [Deploy the MedTech service using the Azure portal](deploy-iot-connector-in-azure.md)
->[!div class="nextstepaction"]
->[Frequently asked questions about the MedTech service](iot-connector-faqs.md)
+> [!div class="nextstepaction"]
+> [Frequently asked questions about the MedTech service](iot-connector-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
Title: How to view AutoML model training code
description: How to view model training code for an automated ML trained model and explanation of each stage. --++
If you make changes to `script.py` that require additional dependencies, or you
### Submit the experiment
-Since the generated code isn't driven by automated ML anymore, instead of creating and submitting an AutoML Job, you need to create a [`Command Job`](/how-to-train-sdk) and provide the generated code (script.py) to it.
+Since the generated code isn't driven by automated ML anymore, instead of creating and submitting an AutoML Job, you need to create a `Command Job` and provide the generated code (script.py) to it.
The following example contains the parameters and regular dependencies needed to run a Command Job, such as compute, environment, etc. ```python
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
$virtualNetwork | Set-AzVirtualNetwork
- [Azure Functions](https://azure.microsoft.com/services/functions/) > [!NOTE]
-> App services deployed under App Service Plan do not support NSG Flow Logs. Please refer [this documentaion](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works) for additional details.
+> App services deployed under the App Service Plan do not support NSG Flow Logs. [Learn more](../app-service/overview-vnet-integration.md#how-regional-virtual-network-integration-works).
## Best practices
partner-solutions Add Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/add-connectors.md
Title: Azure services and Confluent Cloud integration - Azure partner solutions
+ Title: Azure services and Confluent Cloud integration
description: This article describes how to use Azure services and install connectors for Confluent Cloud integration.
partner-solutions Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-cli.md
Title: Create Apache Kafka for Confluent Cloud through Azure CLI - Azure partner solutions
+ Title: Create Apache Kafka for Confluent Cloud through Azure CLI
description: This article describes how to use the Azure CLI to create an instance of Apache Kafka for Confluent Cloud. Last updated 06/07/2021
partner-solutions Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-powershell.md
Title: Create Apache Kafka for Confluent Cloud through Azure PowerShell - Azure partner solutions
+ Title: Create Apache Kafka for Confluent Cloud through Azure PowerShell
description: This article describes how to use Azure PowerShell to create an instance of Apache Kafka for Confluent Cloud. Last updated 11/03/2021
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create.md
Title: Create Apache Kafka for Confluent Cloud through Azure portal - Azure partner solutions
+ Title: Create Apache Kafka for Confluent Cloud through Azure portal
description: This article describes how to use the Azure portal to create an instance of Apache Kafka for Confluent Cloud. Last updated 12/14/2021
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/get-support.md
Title: Contact support for Confluent Cloud - Azure partner solutions
+ Title: Contact support for Confluent Cloud
description: This article describes how to contact support for Confluent Cloud on the Azure portal. Last updated 06/07/2021
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage.md
Title: Manage a Confluent Cloud - Azure partner solutions
+ Title: Manage a Confluent Cloud
description: This article describes management of a Confluent Cloud on the Azure portal. How to set up single sign-on, delete a Confluent organization, and get support. Last updated 06/07/2021
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/overview.md
Title: Apache Kafka on Confluent Cloud overview - Azure partner solutions
+ Title: Apache Kafka on Confluent Cloud overview
description: Learn about using Apache Kafka on Confluent Cloud in the Azure Marketplace. Last updated 02/22/2022
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/troubleshoot.md
Title: Troubleshooting Apache Kafka for Confluent Cloud - Azure partner solutions
+ Title: Troubleshooting Apache Kafka for Confluent Cloud
description: This article provides information about troubleshooting and frequently asked questions (FAQ) for Confluent Cloud on Azure. Last updated 02/18/2021
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/create.md
Title: Create Datadog - Azure partner solutions
+ Title: Create Datadog
description: This article describes how to use the Azure portal to create an instance of Datadog. Last updated 06/08/2022
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/get-support.md
Title: Get support for Datadog resource - Azure partner solutions
+ Title: Get support for Datadog resource
description: This article describes how to contact support for a Datadog resource. Last updated 05/28/2021
partner-solutions Link To Existing Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/link-to-existing-organization.md
Title: Link to existing Datadog - Azure partner solutions
+ Title: Link to existing Datadog
description: This article describes how to use the Azure portal to link to an existing instance of Datadog. Last updated 05/28/2021
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/manage.md
Title: Manage a Datadog resource - Azure partner solutions
+ Title: Manage a Datadog resource
description: This article describes management of a Datadog resource in the Azure portal. How to set up single sign-on, delete a Confluent organization, and get support. Last updated 05/28/2021
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/overview.md
Title: Datadog overview - Azure partner solutions
+ Title: Datadog overview
description: Learn about using Datadog in the Azure Marketplace. Last updated 05/28/2021
partner-solutions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/prerequisites.md
Title: Prerequisites for Datadog on Azure - Azure partner solutions
+ Title: Prerequisites for Datadog on Azure
description: This article describes how to configure your Azure environment to create an instance of Datadog. Last updated 05/28/2021
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md
Title: Troubleshooting for Datadog - Azure partner solutions
+ Title: Troubleshooting for Datadog
description: This article provides information about troubleshooting for Datadog on Azure. Last updated 05/28/2021
partner-solutions Dynatrace Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-create.md
Title: Create Dynatrace for Azure resource - Azure partner solutions
+ Title: Create Dynatrace for Azure resource
description: This article describes how to use the Azure portal to create an instance of Dynatrace.
partner-solutions Dynatrace How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-configure-prereqs.md
Title: Configure pre-deployment to use Dynatrace with Azure - Azure partner solutions
+ Title: Configure pre-deployment to use Dynatrace with Azure
description: This article describes how to complete the prerequisites for Dynatrace on the Azure portal.
partner-solutions Dynatrace How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-how-to-manage.md
Title: Manage your Dynatrace for Azure integration - Azure partner solutions
+ Title: Manage your Dynatrace for Azure integration
description: This article describes how to manage Dynatrace on the Azure portal.
partner-solutions Dynatrace Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-link-to-existing.md
Title: Linking to an existing Dynatrace for Azure resource - Azure partner solutions
+ Title: Linking to an existing Dynatrace for Azure resource
description: This article describes how to use the Azure portal to link to an instance of Dynatrace.
partner-solutions Dynatrace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-overview.md
Title: Dynatrace for Azure overview - Azure partner solutions
+ Title: Dynatrace for Azure overview
description: Learn about using the Dynatrace Cloud-Native Observability Platform in the Azure Marketplace.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
Title: Troubleshooting Dynatrace for Azure - Azure partner solutions
+ Title: Troubleshooting Dynatrace for Azure
description: This article provides information about troubleshooting Dynatrace for Azure
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/create.md
Title: Create Elastic application - Azure partner solutions
+ Title: Create Elastic application
description: This article describes how to use the Azure portal to create an instance of Elastic. Last updated 09/02/2021
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/manage.md
Title: Manage an Elastic integration with Azure - Azure partner solutions
+ Title: Manage an Elastic integration with Azure
description: This article describes management of Elastic on the Azure portal. How to configure diagnostic settings and delete the resource. Last updated 09/02/2021
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/overview.md
Title: Elastic integration overview - Azure partner solutions
+ Title: Elastic integration overview
description: Learn about using the Elastic Cloud-Native Observability Platform in the Azure Marketplace. Last updated 09/02/2021
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
Title: Troubleshooting Elastic - Azure partner solutions
+ Title: Troubleshooting Elastic
description: This article provides information about troubleshooting Elastic integration with Azure. Last updated 09/02/2021
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/create.md
Title: Create a Logz.io resource - Azure partner solutions
+ Title: Create a Logz.io resource
description: Quickstart article that describes how to create a Logz.io resource in Azure. Last updated 10/25/2021
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/manage.md
Title: Manage the Azure integration with Logz.io - Azure partner solutions
+ Title: Manage the Azure integration with Logz.io
description: Learn how to manage the Azure integration with Logz.io. Last updated 10/25/2021
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/overview.md
Title: Logz.io overview - Azure partner solutions
+ Title: Logz.io overview
description: Learn about Azure integration using Logz.io in Azure Marketplace. Last updated 10/25/2021
partner-solutions Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/setup-sso.md
Title: Single sign-on for Azure integration with Logz.io - Azure partner solutions
+ Title: Single sign-on for Azure integration with Logz.io
description: Learn about how to set up single sign-on for Azure integration with Logz.io. Last updated 10/25/2021
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/troubleshoot.md
Title: Troubleshooting Logz.io - Azure partner solutions
+ Title: Troubleshooting Logz.io
description: This article describes how to troubleshoot Logz.io integration with Azure. Last updated 05/24/2022
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Title: Offerings from partners - Azure partner solutions
-description: Learn about solutions offered by partners on Azure.
+ Title: Overview of Azure Native ISV Services
+description: Introduction to the Azure Native ISV Services.
Previously updated : 08/24/2022 Last updated : 10/10/2022
-# Extend Azure with solutions from partners
+# Azure Native ISV Services overview
-Partner organizations offer solutions that you can use in Azure to enhance your cloud infrastructure. These solutions are fully integrated into Azure. You work with these solutions in much the same way you would work with solutions from Microsoft. You use a resource provider, resource types, and SDKs to manage the solution.
+Azure Native ISV Services enables customers to easily provision, manage, and tightly integrate the most commonly used ISV software and services on Azure. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
-Partner solutions are available through the Marketplace.
+## Features of Azure Native ISV Services
-| Partner solution | Description |
-| : | : |
-| [Apache Kafka for Confluent Cloud](./apache-kafka-confluent-cloud/overview.md) | Fully managed event streaming platform powered by Apache Kafka |
-| [Datadog](./datadog/overview.md) | Monitor your servers, clouds, metrics, and apps in one place. |
-| [Elastic](./elastic/overview.md) | Monitor the health and performance of your Azure environment. |
-| [Logz.io](./logzio/overview.md) | Monitor the health and performance of your Azure environment. |
-| [Dynatrace for Azure](./dynatrace/dynatrace-overview.md) | Use Dynatrace for Azure for monitoring your workloads using the Azure portal. |
-| [NGINX for Azure (preview)](./nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
+The features of Azure Native ISV Services are listed below.
+
+### Unified operations
+
+- Integrated onboarding: Use ARM templates, the SDK, the CLI, and the Azure portal to create and manage services (see the CLI sketch after the Integrations list).
+- Unified management: Manage the entire lifecycle of these ISV services through the Azure portal.
+- Unified access: Use single sign-on (SSO) through Azure Active Directory; no separate ISV authentication is needed to subscribe to the service.
+
+### Integrations
+
+- Logs and metrics: Use Microsoft Azure Monitor to collect telemetry across all Azure environments.
+- VNet injection: Provides private data plane access to Azure Native ISV services from customers' virtual networks.
+- Unified billing: Engage with a single entity, Microsoft Azure Marketplace, for billing. No separate license purchase is required to use Azure Native ISV Services.
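
Because each Azure Native ISV Service resource is an ordinary ARM resource, standard Azure tooling applies to it. The following Azure CLI sketch illustrates the unified-operations idea; the `Microsoft.Datadog/monitors` resource type is used purely as an illustration, and any partner resource type from [Extend Azure with Azure Native ISV Services](partners.md) could be substituted.

```azurecli
# Azure Native ISV Service resources are ordinary ARM resources, so standard Azure CLI
# tooling works against them. The resource type below is illustrative only.
az resource list --resource-type "Microsoft.Datadog/monitors" --output table

# Tags, locks, and role assignments apply the same way as for any other Azure resource.
az resource tag --ids "<resource-id-from-the-list-above>" --tags env=prod
```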
partner-solutions Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/partners.md
+
+ Title: Partner services
+description: Learn about services offered by partners on Azure.
+ Last updated : 09/24/2022
+# Extend Azure with Azure Native ISV Services
+
+Partner organizations use Azure Native ISV Services to offer solutions that you can use in Azure to enhance your cloud infrastructure. These Azure Native ISV Services are fully integrated into Azure. You work with these solutions in much the same way you would work with solutions from Microsoft. You use a resource provider, resource types, and SDKs to manage the solution.
+
+Azure Native ISV Services are available through the Marketplace.
+
+## Observability
+
+|Partner |Description |
+|||
+|[Datadog](datadog/overview.md) | Monitoring and analytics platform for large scale applications. |
+|[Elastic](elastic/overview.md) | Build modern search experiences and maximize visibility into health, performance, and security of your infrastructure, applications, and data. |
+|[Logz.io](logzio/overview.md) | Observability platform that centralizes log, metric, and tracing analytics. |
+|[Dynatrace for Azure](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. |
+
+## Data and storage
+
+|Partner |Description |
+|||
+| [Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event streaming platform powered by Apache Kafka. |
+
+## Networking and security
+
+|Partner |Description |
+|||
+|[NGINX for Azure (preview)](nginx/nginx-overview.md) | Use NGINX for Azure (preview) as a reverse proxy within your Azure environment. |
private-link Private Link Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-service-overview.md
Azure Private Link service is the reference to your own service that is powered by Azure Private Link. Your service running behind [Azure Standard Load Balancer](../load-balancer/load-balancer-overview.md) can be enabled for Private Link access so that consumers of your service can access it privately from their own VNets. Your customers can create a private endpoint inside their virtual network and map it to this service. This article explains concepts related to the service provider side. *Figure: Azure Private Link Service.* ## Workflow
-![Private Link service workflow](media/private-link-service-overview/private-link-service-workflow.png)
*Figure: Azure Private Link service workflow.*
purview How To Enable Data Use Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-enable-data-use-management.md
To disable Data use management for a source, resource group, or subscription, a
## Additional considerations related to Data use management - Make sure you write down the **Name** you use when registering in Microsoft Purview. You will need it when you publish a policy. The recommended practice is to make the registered name exactly the same as the endpoint name.-- To disable a source for *Data use management*, remove it first from being bound (i.e. published) in any policy.-- While user needs to have both data source *Owner* and Microsoft Purview *Data source admin* to enable a source for *Data use management*, either of those roles can independently disable it.
+- To disable a source for *Data use management*, you first have to remove any published policies on that data source.
+- While a user needs to have both the data source *Owner* and Microsoft Purview *Data source admin* roles to enable a source for *Data use management*, **any** Data source admin for the collection can disable it.
- Disabling *Data use management* for a subscription will disable it also for all assets registered in that subscription. > [!WARNING]
purview How To Policies Data Owner Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-data-owner-authoring-generic.md
Before authoring data policies in the Microsoft Purview governance portal, you'l
## Create a new policy This section describes the steps to create a new policy in Microsoft Purview.
-Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-and-publish-data-owner-policies).
+Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-or-update-access-policies).
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
Now that you have created your policy, you will need to publish it for it to bec
## Publish a policy A newly created policy is in the **draft** state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
-Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-and-publish-data-owner-policies)
+Ensure you have the *Data Source Admin* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-publish-data-owner-policies)
The steps to publish a policy are as follows:
The steps to publish a policy are as follows:
## Update or delete a policy Steps to update or delete a policy in Microsoft Purview are as follows.
-Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-and-publish-data-owner-policies)
+Ensure you have the *Policy Author* permission as described [here](how-to-enable-data-use-management.md#configure-microsoft-purview-permissions-needed-to-create-or-update-access-policies)
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
role-based-access-control Role Assignments Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-alert.md
Previously updated : 07/29/2022 Last updated : 10/30/2022
To get notified of privileged role assignments, you create an alert rule in Azur
```kusto AzureActivity
- | where CategoryValue == "Administrative" and
- OperationNameValue == "Microsoft.Authorization/roleAssignments/write" and
- (ActivityStatusValue == "Start" or ActivityStatus == "Started")
+ | where CategoryValue =~ "Administrative" and
+ OperationNameValue =~ "Microsoft.Authorization/roleAssignments/write" and
+ (ActivityStatusValue =~ "Start" or ActivityStatus =~ "Started")
| extend RoleDefinition = extractjson("$.Properties.RoleDefinitionId",tostring(Properties_d.requestbody),typeof(string)) | extend PrincipalId = extractjson("$.Properties.PrincipalId",tostring(Properties_d.requestbody),typeof(string)) | extend PrincipalType = extractjson("$.Properties.PrincipalType",tostring(Properties_d.requestbody),typeof(string))
To get notified of privileged role assignments, you create an alert rule in Azur
| where Scope !contains "resourcegroups" | extend RoleId = split(RoleDefinition,'/')[-1] | extend RoleDisplayName = case(
- RoleId == 'b24988ac-6180-42a0-ab88-20f7382dd24c', "Contributor",
- RoleId == '8e3af657-a8ff-443c-a75c-2fe8c4bcb635', "Owner",
- RoleId == '18d7d88d-d35e-4fb5-a5c3-7773c20a72d9', "User Access Administrator",
+ RoleId =~ 'b24988ac-6180-42a0-ab88-20f7382dd24c', "Contributor",
+ RoleId =~ '8e3af657-a8ff-443c-a75c-2fe8c4bcb635', "Owner",
+ RoleId =~ '18d7d88d-d35e-4fb5-a5c3-7773c20a72d9', "User Access Administrator",
"Irrelevant") | where RoleDisplayName != "Irrelevant" | project TimeGenerated,Scope, PrincipalId,PrincipalType,RoleDisplayName
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The following list mentions fields that have specific guidelines for Network Ses
| Field | Class | Type | Description | ||-||--| | **EventCount** | Mandatory | Integer | Netflow sources support aggregation, and the **EventCount** field should be set to the value of the Netflow **FLOWS** field. For other sources, the value is typically set to `1`. |
-| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `Flow`: for `NetFlow` type aggregated flows, which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
+| <a name="eventtype"></a> **EventType** | Mandatory | Enumerated | Describes the operation reported by the record.<br><br> For Network Session records, the allowed values are:<br> - `EndpointNetworkSession`: for sessions reported by endpoint systems, including clients and servers. For such systems, the schema supports the `remote` and `local` alias fields. <br> - `NetworkSession`: for sessions reported by intermediary systems and network taps. <br> - `L2NetworkSession`: for sessions reported by intermediary systems and network taps, but which for which only layer 2 information is available. Such events will include MAC addresses but not IP addresses. <br> - `Flow`: for `NetFlow` type aggregated flows, which group multiple similar sessions together. For such records, [EventSubType](#eventsubtype) should be left empty. |
| <a name="eventsubtype"></a>**EventSubType** | Optional | String | Additional description of the event type, if applicable. <br> For Network Session records, supported values include:<br>- `Start`<br>- `End` | | <a name="eventresult"></a>**EventResult** | Mandatory | Enumerated | If the source device does not provide an event result, **EventResult** should be based on the value of [DvcAction](#dvcaction). If [DvcAction](#dvcaction) is `Deny`, `Drop`, `Drop ICMP`, `Reset`, `Reset Source`, or `Reset Destination`<br>, **EventResult** should be `Failure`. Otherwise, **EventResult** should be `Success`. | | **EventResultDetails** | Recommended | Enumerated | Reason or details for the result reported in the [EventResult](#eventresult) field. Supported values are:<br> - Failover <br> - Invalid TCP <br> - Invalid Tunnel <br> - Maximum Retry <br> - Reset <br> - Routing issue <br> - Simulation <br> - Terminated <br> - Timeout <br> - Unknown <br> - NA.<br><br>The original, source specific, value is stored in the [EventOriginalResultDetails](normalization-common-fields.md#eventoriginalresultdetails) field. |
Fields that appear in the table below are common to all ASIM schemas. Any guidel
| **NetworkApplicationProtocol** | Optional | String | The application layer protocol used by the connection or session. If the [DstPortNumber](#dstportnumber) value is provided, we recommend that you include **NetworkApplicationProtocol** too. If the value isn't available from the source, derive the value from the [DstPortNumber](#dstportnumber) value.<br><br>Example: `FTP` | | <a name="networkprotocol"></a> **NetworkProtocol** | Optional | Enumerated | The IP protocol used by the connection or session as listed in [IANA protocol assignment](https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml), which is typically `TCP`, `UDP`, or `ICMP`.<br><br>Example: `TCP` | | **NetworkProtocolVersion** | Optional | Enumerated | The version of [NetworkProtocol](#networkprotocol). When using it to distinguish between IP version, use the values `IPv4` and `IPv6`. |
-| <a name="networkdirection"></a>**NetworkDirection** | Optional | Enumerated | The direction of the connection or session:<br><br> - For the [EventType](#eventtype) `NetworkSession`, **NetworkDirection** represents the direction relative to the organization or cloud environment boundary. Supported values are `Inbound`, `Outbound`, `Local` (to the organization), `External` (to the organization) or `NA` (Not Applicable).<br><br> - For the [EventType](#eventtype) `EndpointNetworkSession`, **NetworkDirection** represents the direction relative to the endpoint. Supported values are `Inbound`, `Outbound`, `Local` (to the system), `Listen` or `NA` (Not Applicable). The `Listen` value indicates that a device has started accepting network connections but isn't actually, necessarily, connected. |
+| <a name="networkdirection"></a>**NetworkDirection** | Optional | Enumerated | The direction of the connection or session:<br><br> - For the [EventType](#eventtype) `NetworkSession`, `Flow` or `L2NetworkSession`, **NetworkDirection** represents the direction relative to the organization or cloud environment boundary. Supported values are `Inbound`, `Outbound`, `Local` (to the organization), `External` (to the organization) or `NA` (Not Applicable).<br><br> - For the [EventType](#eventtype) `EndpointNetworkSession`, **NetworkDirection** represents the direction relative to the endpoint. Supported values are `Inbound`, `Outbound`, `Local` (to the system), `Listen` or `NA` (Not Applicable). The `Listen` value indicates that a device has started accepting network connections but isn't actually, necessarily, connected. |
| <a name="networkduration"></a>**NetworkDuration** | Optional | Integer | The amount of time, in milliseconds, for the completion of the network session or connection.<br><br>Example: `1500` | | **Duration** | Alias | | Alias to [NetworkDuration](#networkduration). | |<a name="networkicmptype"></a> **NetworkIcmpType** | Optional | String | For an ICMP message, the ICMP message type number, as described in [RFC 2780](https://datatracker.ietf.org/doc/html/rfc2780) for IPv4 network connections, or in [RFC 4443](https://datatracker.ietf.org/doc/html/rfc4443) for IPv6 network connections. |
For example, for an inbound event, the field `LocalIpAddr` is an alias to `DstIp
| Field | Class | Type | Description | | | | | |
-| <a name="hostname"></a>**Hostname** | Alias | | - If the event type is `NetworkSession`, Hostname is an alias to [DstHostname](#dsthostname).<br> - If the event type is `EndpointNetworkSession`, Hostname is an alias to `RemoteHostname`, which can alias either [DstHostname](#dsthostname) or [SrcHostName](#srchostname), depending on [NetworkDirection](#networkdirection) |
-| <a name="ipaddr"></a>**IpAddr** | Alias | | - If the event type is `NetworkSession`, Hostname is an alias to [SrcIpAddr](#srcipaddr).<br> - If the event type is `EndpointNetworkSession`, Hostname is an alias to `LocalIpAddr`, which can alias either [SrcIpAddr](#srcipaddr) or [DstIpAddr](#dstipaddr), depending on [NetworkDirection](#networkdirection). |
+| <a name="hostname"></a>**Hostname** | Alias | | - If the event type is `NetworkSession`, `Flow` or `L2NetworkSession`, Hostname is an alias to [DstHostname](#dsthostname).<br> - If the event type is `EndpointNetworkSession`, Hostname is an alias to `RemoteHostname`, which can alias either [DstHostname](#dsthostname) or [SrcHostName](#srchostname), depending on [NetworkDirection](#networkdirection) |
+| <a name="ipaddr"></a>**IpAddr** | Alias | | - If the event type is `NetworkSession`, `Flow` or `L2NetworkSession`, IpAddr is an alias to [SrcIpAddr](#srcipaddr).<br> - If the event type is `EndpointNetworkSession`, IpAddr is an alias to `LocalIpAddr`, which can alias either [SrcIpAddr](#srcipaddr) or [DstIpAddr](#dstipaddr), depending on [NetworkDirection](#networkdirection). |
### <a name="Intermediary"></a>Intermediary device and Network Address Translation (NAT) fields
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
Previously updated : 10/14/2022 Last updated : 10/28/2022
$kvUri = "<key-vault-uri>"
$keyName = "<key-name>" $multiTenantAppId = "<multi-tenant-app-id>"
-Set-AzStorageAccount -ResourceGroupName $rgName `
+Set-AzStorageAccount -ResourceGroupName $isvRgName `
-Name $accountName ` -KeyvaultEncryption ` -UserAssignedIdentityId $userIdentity.Id `
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
Previously updated : 10/14/2022 Last updated : 10/28/2022
$accountName = "<account-name>"
$kvUri = "<key-vault-uri>" $keyName = "<keyName>" $location = "<location>"
-$multiTenantAppId = "<application-id>"
+$multiTenantAppId = "<application-id>" # appId value from multi-tenant app
$userIdentity = Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName $rgName
Remember to replace the placeholder values in brackets with your own values and
accountName="<storage-account>" kvUri="<key-vault-uri>" keyName="<key-name>"
-multiTenantAppId="<multi-tenant-app-id>"
+multiTenantAppId="<multi-tenant-app-id>" # appId value from multi-tenant app
# Get the resource ID for the user-assigned managed identity. identityResourceId=$(az identity show --name $managedIdentity \
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
tcp:[18] <name>:port,-1 <iqn>
``` 15 is the session ID we'll use from the previous example.
-With the session ID, you can create as many sessions as you need however, none of the additional sessions are persistent, even if you modified node.startup. You must recreate them after each reboot. The following script is a loop that creates as many additional sessions as you specify. Replace **numberOfAdditionalSessions** with your desired number of additional sessions and replace **sessionID** with the session ID you'd like to use, then run the script.
+The following script is a loop that creates as many additional sessions as you specify. Replace **numberOfAdditionalSessions** with your desired number of additional sessions and replace **sessionID** with the session ID you'd like to use, then run the script.
``` for i in `seq 1 numberOfAdditionalSessions`; do sudo iscsiadm -m session -r sessionID --op new; done
for i in `seq 1 numberOfAdditionalSessions`; do sudo iscsiadm -m session -r sess
You can verify the number of sessions using `sudo multipath -ll`
+When you've finished creating sessions for each of your volumes, run the following command once for each volume you'd like to maintain persistent connections to. This keeps the volume's connections active when your client reboots.
+
+```
+sudo iscsiadm -m node --targetname yourTargetIQN --portal yourTargetPortalHostName:yourTargetPortalPort --op update -n node.session.nr_sessions -v numberofAdditionalSessions+1
+```
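
To make the two commands above concrete, here's a hedged end-to-end sketch for a single volume. The target IQN, portal host name and port, session ID, and session count are placeholder values; substitute the values for your own volume and the session ID returned by `sudo iscsiadm -m session`.

```bash
# Placeholder values - replace with your volume's target IQN, portal host name/port,
# and the session ID shown by: sudo iscsiadm -m session
targetIQN="iqn.2022-10.example:examplevolume"
portal="example-portal-hostname:3260"
sessionID="15"

# Create two additional (non-persistent) sessions against the existing session.
for i in `seq 1 2`; do sudo iscsiadm -m session -r $sessionID --op new; done

# Persist all three sessions for this volume across reboots.
sudo iscsiadm -m node --targetname $targetIQN --portal $portal --op update -n node.session.nr_sessions -v 3
```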
+ ### Single-session connections To establish persistent iSCSI connections, modify **node.startup** in **/etc/iscsi/iscsid.conf** from **manual** to **automatic**.
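
As a small sketch, that edit to **/etc/iscsi/iscsid.conf** can be made with `sed`; this assumes the default `node.startup = manual` line is present and uncommented in your distribution's configuration file.

```bash
# Switch iSCSI node startup from manual to automatic so node connections persist across reboots.
sudo sed -i 's/^node.startup = manual/node.startup = automatic/' /etc/iscsi/iscsid.conf

# Verify the setting took effect.
grep '^node.startup' /etc/iscsi/iscsid.conf
```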
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/7-beyond-data-warehouse-migration.md
A key reason to migrate your existing data warehouse to Azure Synapse Analytics
- [Azure HDInsight](../../../hdinsight/index.yml) to process large amounts of data, and to join big data with Azure Synapse data by creating a logical data warehouse using PolyBase. -- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/databricks/spark/latest/structured-streaming/kafka) to integrate live streaming data from Azure Synapse.
+- [Azure Event Hubs](../../../event-hubs/event-hubs-about.md), [Azure Stream Analytics](../../../stream-analytics/stream-analytics-introduction.md), and [Apache Kafka](/azure/hdinsight/kafka/apache-kafka-introduction) to integrate live streaming data from Azure Synapse.
The growth of big data has led to an acute demand for [machine learning](../../machine-learning/what-is-machine-learning.md) to enable custom-built, trained machine learning models for use in Azure Synapse. Machine learning models enable in-database analytics to run at scale in batch, on an event-driven basis and on-demand. The ability to take advantage of in-database analytics in Azure Synapse from multiple BI tools and applications also guarantees consistent predictions and recommendations.
By migrating your data warehouse to Azure Synapse, you can take advantage of the
## Next steps
-To learn about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
+To learn about migrating to a dedicated SQL pool, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migrate-to-synapse-analytics-guide.md).
virtual-desktop App Attach File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-file-share.md
Here are some other things we recommend you do to optimize MSIX app attach perfo
The setup process for MSIX app attach file share is largely the same as [the setup process for FSLogix profile file shares](create-host-pools-user-profile.md). However, you'll need to assign users different permissions. MSIX app attach requires read-only permissions to access the file share.
-If you're storing your MSIX applications in Azure Files, then for your session hosts, you'll need to assign all session host VMs both storage account role-based access control (RBAC) and file share New Technology File System (NTFS) permissions on the share.
+If you're storing your MSIX applications in Azure Files, then for your session hosts, you'll need to assign all session host VMs both storage account role-based access control (RBAC) and file share New Technology File System (NTFS) permissions on the share.
| Azure object | Required role | Role function | |--|--|--|
-| Session host (VM computer objects)| Storage File Data SMB Share Reader | Allows for read access to Azure File Share over SMB |
+| Session hosts (VM computer objects)| Storage File Data SMB Share Reader | Allows for read access to Azure File Share over SMB |
| Admins on File Share | Storage File Data SMB Share Elevated Contributor | Full control | | Users on File Share | Storage File Data SMB Share Contributor | Read and Execute, Read, List folder contents |
-To assign session host VMs permissions for the storage account and file share:
+To assign session host VMs permissions for the storage account and file share (a role-assignment sketch follows these steps):
1. Create an Active Directory Domain Services (AD DS) security group.
-2. Add the computer accounts for all session host VMs as members of the group.
+2. Add the computer accounts for all session host VMs as members of the group.
3. Sync the AD DS group to Azure Active Directory (Azure AD).
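
Once the group has synced to Azure AD (step 3), the share-level role from the table above can be assigned to it at the storage account scope. The following Azure CLI sketch uses placeholder values for the group object ID and storage account resource ID; the portal or PowerShell work equally well.

```azurecli
# Placeholders: supply the object ID of the synced security group that contains the
# session host computer accounts, and the resource ID of the storage account.
az role assignment create \
    --assignee-object-id "<session-host-group-object-id>" \
    --assignee-principal-type Group \
    --role "Storage File Data SMB Share Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```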
To assign session host VMs permissions for the storage account and file share:
Once you've assigned the identity to your storage, follow the instructions in the articles in [Next steps](#next-steps) to grant other required permissions to the identity you've assigned to the VMs.
-You'll also need to make sure your session host VMs have NTFS permissions. You must have an OU container that's sourced from Active Directory Domain Services (AD DS), and your users must be members of that OU to use these permissions.
+You'll also need to make sure your session host VMs have **Modify** NTFS permissions. You must have an OU container that's sourced from Active Directory Domain Services (AD DS), and your users must be members of that OU to use these permissions.
## Next steps
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
In order to deploy and make Azure Virtual Desktop available to your users, you must allow specific URLs that your session host virtual machines (VMs) can access them anytime. Users also need to be able to connect to certain URLs to access their Azure Virtual Desktop resources. This article lists the required URLs you need to allow for your session hosts and users. These URLs could be blocked if you're using [Azure Firewall](../firewall/protect-azure-virtual-desktop.md) or a third-party firewall or [proxy service](proxy-server-support.md). Azure Virtual Desktop doesn't support deployments that block the URLs listed in this article. >[!IMPORTANT]
->Proxy Services that perform the following are not supported with Azure Virtual Desktop.
+>Proxy services that perform the following are not recommended with Azure Virtual Desktop. For more information, see [proxy service support](proxy-server-support.md).
>1. SSL Termination (Break and Inspect) >2. Require Authentication
virtual-machines Automatic Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-extension-upgrade.md
Automatic Extension Upgrade supports the following extensions (and more are adde
- [Guest Configuration Extension](./extensions/guest-configuration.md) ΓÇô Linux and Windows - Key Vault ΓÇô [Linux](./extensions/key-vault-linux.md) and [Windows](./extensions/key-vault-windows.md) - [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md)
+- [Log Analytics Agent for Linux](../azure-monitor/agents/log-analytics-agent.md)
+- [Azure Diagnostics extension for Linux](../azure-monitor/agents/diagnostics-extension-overview.md)
- [DSC extension for Linux](extensions/dsc-linux.md)
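
Automatic upgrade is opted into per extension on each VM. The following Azure CLI sketch shows the idea for the Azure Monitor Agent on a Linux VM; the resource group and VM names are placeholders, and the `--enable-auto-upgrade` flag should be verified against your CLI version.

```azurecli
# Install (or update) the Azure Monitor Agent extension with automatic upgrade enabled.
az vm extension set \
    --resource-group "<resource-group>" \
    --vm-name "<vm-name>" \
    --name AzureMonitorLinuxAgent \
    --publisher Microsoft.Azure.Monitor \
    --enable-auto-upgrade true
```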
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Maintenance Configurations give you the ability to control and manage updates for many Azure virtual machine resources since Azure frequently updates its infrastructure to improve reliability, performance, security or launch new features. Most updates are transparent to users, but some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even few seconds of a VM freezing or disconnecting for maintenance. Maintenance configurations is integrated with Azure Resource Graph (ARG) for low latency and high scale customer experience.
+Maintenance Configurations gives you the ability to control and manage updates for many Azure virtual machine resources, since Azure frequently updates its infrastructure to improve reliability, performance, and security, or to launch new features. Most updates are transparent to users, but some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even a few seconds of a VM freezing or disconnecting for maintenance. Maintenance Configurations is integrated with Azure Resource Graph (ARG) for a low-latency, high-scale customer experience.
>[!IMPORTANT] > Users are required to have a role of at least contributor in order to use maintenance configurations.
Maintenance Configurations give you the ability to control and manage updates fo
Maintenance Configurations currently supports three (3) scopes: Host, OS image, and Guest. While each scope allows scheduling and managing updates, the major difference lies in the resources they each support. This section outlines the details on the various scopes and their supported types:

| Scope | Supported resources |
|-|-|
| Host | Isolated Virtual Machines, Isolated Virtual Machine Scale Sets, Dedicated Hosts |
| OS Image | Virtual Machine Scale Sets |
| Guest | Virtual Machines, Azure Arc Servers |

### Host

With this scope, you can manage platform updates that do not require a reboot on your *isolated VMs*, *isolated Virtual Machine Scale Set instances* and *dedicated hosts*. Some features and limitations unique to the host scope are:

- Schedules can be set anytime within 35 days. After 35 days, updates are automatically applied.
- A minimum of a 2-hour maintenance window is required for this scope.
+- Rack level maintenance is not currently supported.
[Learn more about Azure Dedicated Hosts](dedicated-hosts.md)

### OS image
-Using this scope with maintenance configurations lets you decide when to apply upgrades to OS disks in your *virtual machine scale sets* through an easier and more predictable experience. An upgrade works by replacing the OS disk of a VM with a new disk created using the latest image version. Any configured extensions and custom data scripts are run on the OS disk, while data disks are retained. Some features and limitations unique to this scope are:
-- Scale sets need to have [automatic OS upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) enabled in order to use maintenance configurations.-- Schedule recurrence is defaulted to daily -- A minimum of 5 hours is required for the maintenance window
+Using this scope with maintenance configurations lets you decide when to apply upgrades to OS disks in your *Virtual Machine Scale Sets* through an easier and more predictable experience. An upgrade works by replacing the OS disk of a VM with a new disk created using the latest image version. Any configured extensions and custom data scripts are run on the OS disk, while data disks are retained. Some features and limitations unique to this scope are:
+
+- Scale sets need to have [automatic OS upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) enabled in order to use maintenance configurations.
+- You can schedule recurrence up to a week (7 days).
+- A minimum of 5 hours is required for the maintenance window.
### Guest

This scope is integrated with [update management center](../update-center/overview.md), which allows you to save recurring deployment schedules to install updates for your Windows Server and Linux machines in Azure, in on-premises environments, and in other cloud environments connected using Azure Arc-enabled servers. Some features and limitations unique to this scope include (a CLI sketch follows below):

- [Patch orchestration](automatic-vm-guest-patching.md#patch-orchestration-modes) for virtual machines needs to be set to AutomaticByPlatform.
- A minimum of 1 hour and 10 minutes is required for the maintenance window.
- There is no limit to the recurrence of your schedule.
To learn more about this topic, check out [update management center and scheduled patching](../update-center/scheduled-patching.md).
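
As a minimal sketch, a Guest-scope maintenance configuration can be created with the Azure CLI. All names and schedule values below are placeholders, and the parameter names should be verified against the current `maintenance` CLI extension before use.

```azurecli
# Requires the 'maintenance' CLI extension: az extension add --name maintenance
az maintenance configuration create \
    --resource-group "<resource-group>" \
    --resource-name "<configuration-name>" \
    --location "<region>" \
    --maintenance-scope InGuestPatch \
    --maintenance-window-duration "01:30" \
    --maintenance-window-recur-every "7Days" \
    --maintenance-window-start-date-time "2022-12-01 00:00" \
    --maintenance-window-time-zone "UTC" \
    --reboot-setting IfRequired \
    --extension-properties InGuestPatchMode="User"
```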
-
+ ## Management options You can create and manage maintenance configurations using any of the following options:
For an Azure Functions sample, see [Scheduling Maintenance Updates with Maintena
## Next steps
-To learn more, see [Maintenance and updates](maintenance-and-updates.md).
+To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Azure Monitor Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/azure-monitor-providers.md
If you don't configure any providers at the time of deployment, the Azure Monito
## Provider type: SAP NetWeaver
-You can configure one or more providers of provider type SAP NetWeaver to enable data collection from SAP NetWeaver layer. Azure Monitor for SAP solutions NetWeaver provider uses the existing [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information.
-
-For the current release, the following SOAP web methods are the standard, out-of-box methods invoked by Azure Monitor for SAP solutions.
-
-| Web method | ABAP support | Java support | Metrics |
-| - | | | - |
-| **GetSystemInstanceList** | Yes | Yes | Instance availability, message server, gateway, ICM, ABAP availability |
-| **GetProcessList** | Yes | Yes | If instance list is red, you can find what process caused the issue |
-| **GetQueueStatistic** | Yes | Yes | Queue statistics (DIA, BATCH, UPD) |
-| **ABAPGetWPTable** | Yes | No | Work process utilization |
-| **EnqGetStatistic** | Yes | Yes | Locks |
+You can configure one or more providers of provider type SAP NetWeaver to enable data collection from the SAP NetWeaver layer. The Azure Monitor for SAP solutions NetWeaver provider uses the existing:
+- [**SAPControl** Web service](https://www.sap.com/documents/2016/09/0a40e60d-8b7c-0010-82c7-eda71af511fa.html) interface to retrieve the appropriate information (also available in Azure Monitor for SAP solutions classic)
+- SAP RFC - the ability to collect additional information from the SAP system using standard SAP RFCs (available only in Azure Monitor for SAP solutions)
You can get the following data with the SAP NetWeaver provider: -- SAP system and application server availability
- - Instance process availability of dispatcher
- - ICM
- - Gateway
- - Message server
- - Enqueue Server
- - IGS Watchdog
-- Work process usage statistics and trends-- Enqueue Lock statistics and trends-- Queue usage statistics and trends-- SMON Metrics (**/SDF/SMON**)-- SWNC Workload, Memory, Transaction, User, RFC Usage (St03n)-- Short Dumps (**ST22**)-- Object Lock (**SM12**)-- Failed Updates (**SM13**)-- System Logs Analysis (**SM21**)-- Batch Jobs Statistics (**SM37**)-- Outbound Queues (**SMQ1**)-- Inbound Queues (**SMQ2**)-- Transactional RFC (**SM59**)-- STMS Change Transport System Metrics (**STMS**)
+- SAP system and application server availability (for example, instance process availability of dispatcher, ICM, Gateway, Message Server, Enqueue Server, IGS Watchdog) (SAPOsControl)
+- Work process usage statistics and trends (SAPOsControl)
+- Enqueue Lock statistics and trends (SAPOsControl)
+- Queue usage statistics and trends (SAPOsControl)
+- SMON Metrics (**Tcode - /SDF/SMON**) (RFC)
+- SWNC Workload, Memory, Transaction, User, RFC Usage (**Tcode - St03n**) (RFC)
+- Short Dumps (**Tcode - ST22**) (RFC)
+- Object Lock (**Tcode - SM12**) (RFC)
+- Failed Updates (**Tcode - SM13**) (RFC)
+- System Logs Analysis (**Tcode - SM21**) (RFC)
+- Batch Jobs Statistics (**Tcode - SM37**) (RFC)
+- Outbound Queues (**Tcode - SMQ1**) (RFC)
+- Inbound Queues (**Tcode - SMQ2**) (RFC)
+- Transactional RFC (**Tcode - SM59**) (RFC)
+- STMS Change Transport System Metrics (**Tcode - STMS**) (RFC)
![Diagram showing the NetWeaver provider architecture.](./media/azure-monitor-providers/netweaver-architecture.png)
You can see the following data with the SAP HANA provider:
- SAP HANA host status - SAP HANA system replication - SAP HANA Backup data
+- Fetching Services
+- Network throughput between the nodes in a scaleout system
+- SAP HANA Long Idling Cursors
+- SAP HANA Long Running Transactions
+- Checks for configuration parameter values
+- SAP HANA Uncommitted Write Transactions
+- SAP HANA Disk Fragmentation
+- SAP HANA Statistics Server Health
+- SAP HANA High Memory Usage Service
+- SAP HANA Blocking Transactions
+ Configuring the SAP HANA provider requires: - The host IP address,
virtual-network Nat Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-availability-zones.md
# NAT gateway and availability zones NAT gateway is a zonal resource, which means it can be deployed and operate out of individual availability zones. With zone isolation scenarios, you can align your zonal NAT gateway resources with zonally designated IP based resources, such as virtual machines, to provide zone resiliency against outages. Review this document to understand key concepts and fundamental design guidance. +
+*Figure 1: Zonal deployment of NAT gateway.*
+ NAT gateway can either be designated to a specific zone within a region or to 'no zone'. Which zone property you select for your NAT gateway resource will inform the zone property of the public IP address that can be used for outbound connectivity as well. ## NAT gateway has built in resiliency
Now that you understand the zone-related properties for NAT gateway, see the fol
### Single zonal NAT gateway resource for zone-spanning resources
-A single zonal NAT gateway resource can be configured to either a subnet that contains virtual machines that span across multiple availability zones or to multiple subnets with different zonal virtual machines. When this type of deployment is configured, NAT gateway will provide outbound connectivity to the internet for all subnet resources from the specific zone it's located. If the zone that NAT gateway is deployed in goes down, then outbound connectivity across all virtual machine instances associated with the NAT gateway will also go down. This set up doesn't provide the best method of zone-resiliency.
+A single zonal NAT gateway resource can be configured to either a subnet that contains virtual machines that span across multiple availability zones or to multiple subnets with different zonal virtual machines. When this type of deployment is configured, NAT gateway will provide outbound connectivity to the internet for all subnet resources from the specific zone it's located in. If the zone that NAT gateway is deployed in goes down, then outbound connectivity across all virtual machine instances associated with the NAT gateway will also go down. This setup doesn't provide the best method of zone-resiliency.
++
+*Figure 2: Single zonal NAT gateway resource for multi-zone spanning resources doesn't provide an effective method of zone-resiliency against outages.*
### Zonal NAT gateway resource for each zone in a region to create zone-resiliency A zonal promise for zone isolation scenarios exists when a virtual machine instance using a NAT gateway resource is in the same zone as the NAT gateway resource and its public IP addresses. The pattern you want to use for zone isolation is creating a "zonal stack" per availability zone. This "zonal stack" consists of virtual machine instances, a NAT gateway resource with public IP addresses or prefix on a subnet all in the same zone. +
+*Figure 3: Zonal isolation by creating zonal stacks with the same zone NAT gateway, public IPs, and virtual machines provides the best method of ensuring zone resiliency against outages.*
+ Failure of outbound connectivity due to a zone outage is isolated to the specific zone affected. The outage won't affect the other zonal stacks where other NAT gateways are deployed with their own subnets and zonal public IPs. Creating zonal stacks for each availability zone within a region is the most effective method for building zone-resiliency against outages for NAT gateway.
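
As a sketch of one "zonal stack", the following Azure CLI commands create a zone 1 public IP and NAT gateway and attach the gateway to the subnet that holds the zone 1 virtual machines. Resource names are placeholders, and the `--zone` parameter on `az network nat gateway create` should be confirmed for your CLI version.

```azurecli
# Zonal (zone 1) public IP for the NAT gateway.
az network public-ip create \
    --resource-group "<resource-group>" \
    --name nat-pip-zone1 \
    --sku Standard \
    --zone 1

# NAT gateway pinned to the same zone, using the zonal public IP.
az network nat gateway create \
    --resource-group "<resource-group>" \
    --name nat-gw-zone1 \
    --public-ip-addresses nat-pip-zone1 \
    --idle-timeout 10 \
    --zone 1

# Associate the NAT gateway with the subnet that holds the zone 1 virtual machines.
az network vnet subnet update \
    --resource-group "<resource-group>" \
    --vnet-name "<vnet-name>" \
    --name "<zone1-subnet-name>" \
    --nat-gateway nat-gw-zone1
```

Repeating this pattern once per availability zone, each with its own subnet, public IP, and NAT gateway, yields the zone-resilient design described above.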