Updates from: 08/17/2023 01:16:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Last updated 06/26/2023 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Manage Custom Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md
-+ Last updated 02/14/2020
active-directory-b2c Tenant Management Directory Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md
The response from the API call looks similar to the following JSON:
    {
      "directorySizeQuota": {
        "used": 211802,
-       "total": 300000
+       "total": 50000000
      }
    }
  ]
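As an aside, these numbers can be pulled directly with the Microsoft Graph PowerShell SDK. A minimal sketch, assuming the Microsoft.Graph module and that `directorySizeQuota` is exposed on the beta `organization` resource:

```powershell
# Sketch: read the tenant's directorySizeQuota via Microsoft Graph.
# Assumes the Microsoft.Graph PowerShell module; the property is exposed on the
# beta organization resource.
Connect-MgGraph -Scopes "Organization.Read.All"
$org = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/beta/organization?$select=directorySizeQuota'
$quota = $org.value[0].directorySizeQuota
'Used {0:N0} of {1:N0} directory objects ({2:P1})' -f $quota.used, $quota.total, ($quota.used / $quota.total)
```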
If your tenant usage is higher than 80%, you can remove inactive users or request an increase to your directory quota.
## Request increase directory quota size
-You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md)
+You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md)
active-directory-domain-services Alert Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md
ms.assetid: f168870c-b43a-4dd6-a13f-5cfadc5edf2c
+ Last updated 01/29/2023 - # Known issues: Service principal alerts in Azure Active Directory Domain Services
active-directory-domain-services Create Forest Trust Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md
Last updated 04/03/2023 --+ #Customer intent: As an identity administrator, I want to create an Azure AD Domain Services forest and one-way outbound trust from an Azure Active Directory Domain Services forest to an on-premises Active Directory Domain Services forest using Azure PowerShell to provide authentication and resource access between forests.- # Create an Azure Active Directory Domain Services forest trust to an on-premises domain using Azure PowerShell
For more conceptual information about forest types in Azure AD DS, see [How do f
[Install-Script]: /powershell/module/powershellget/install-script <!-- EXTERNAL LINKS -->
-[powershell-gallery]: https://www.powershellgallery.com/
+[powershell-gallery]: https://www.powershellgallery.com/
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Last updated 01/29/2023 --+ # Enable Azure Active Directory Domain Services using PowerShell
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
Last updated 01/29/2023 -+ # Configure scoped synchronization from Azure AD to Azure Active Directory Domain Services using Azure AD PowerShell
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
Last updated 01/29/2023 -+ # Harden an Azure Active Directory Domain Services managed domain
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md
ms.assetid: 57cbf436-fc1d-4bab-b991-7d25b6e987ef
+ Last updated 04/03/2023 - # How objects and credentials are synchronized in an Azure Active Directory Domain Services managed domain
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
-+ Last updated 06/01/2023
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
ms.assetid: 4bc8c604-f57c-4f28-9dac-8b9164a0cf0b
+ Last updated 01/29/2023 - # Common errors and troubleshooting steps for Azure Active Directory Domain Services
active-directory-domain-services Tutorial Create Instance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md
+ Last updated 04/03/2023 - #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain and define advanced configuration options so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
To see this managed domain in action, create and join a virtual machine to the domain.
[availability-zones]: ../reliability/availability-zones-overview.md [concepts-sku]: administration-concepts.md#azure-ad-ds-skus
-<!-- EXTERNAL LINKS -->
+<!-- EXTERNAL LINKS -->
active-directory-domain-services Tutorial Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md
+ Last updated 08/01/2023 - #Customer intent: As an identity administrator, I want to create an Azure Active Directory Domain Services managed domain so that I can synchronize identity information with my Azure Active Directory tenant and provide Domain Services connectivity to virtual machines and applications in Azure.
Before you domain-join VMs and deploy applications that use the managed domain,
[concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS -->
-[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix
+[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Applications and systems that support customization of the attribute list includ
> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true. You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes).

> [!NOTE]
-> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
+> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory. Provisioning multi-valued directory extension attributes is not supported.
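Because those names are case-sensitive, it's safer to copy them from the directory than to retype them. A hedged sketch, assuming the Microsoft.Graph PowerShell module; `{app-object-id}` is a placeholder for the application that owns the extensions:

```powershell
# Sketch: list directory extension attributes exactly as defined so the
# case-sensitive names can be copied into the Azure AD attribute list.
Connect-MgGraph -Scopes "Application.Read.All"
$ext = Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/applications/{app-object-id}/extensionProperties'
$ext.value | ForEach-Object { $_.name }   # e.g. extension_<appId>_acmeCostCenter
```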
When you're editing the list of supported attributes, the following properties are provided:
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
+ Last updated 10/20/2022
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
+ Last updated 11/17/2022
active-directory Application Proxy Configure Custom Home Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md
+ Last updated 11/17/2022
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
Azure Active Directory (Azure AD) Application Proxy has partnered with PingAccess.
With PingAccess for Azure AD, you can give users access and single sign-on (SSO) to applications that use headers for authentication. Application Proxy treats these applications like any other, using Azure AD to authenticate access and then passing traffic through the connector service. PingAccess sits in front of the applications and translates the access token from Azure AD into a header. The application then receives the authentication in the format it can read.
-Your users wonΓÇÖt notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so theyΓÇÖll still balance loads automatically.
+Your users won't notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so they'll still balance loads automatically.
## How do I get access?
For more information, see [Azure Active Directory editions](../fundamentals/what
## Publish your application in Azure
-This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If youΓÇÖve already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section.
+This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If you've already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section.
> [!NOTE]
> Since this scenario is a partnership between Azure AD and PingAccess, some of the instructions exist on the Ping Identity site.
To publish your own on-premises application:
> [!NOTE] > For a more detailed walkthrough of this step, see [Add an on-premises app to Azure AD](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad).
- 1. **Internal URL**: Normally you provide the URL that takes you to the appΓÇÖs sign-in page when youΓÇÖre on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess.
+ 1. **Internal URL**: Normally you provide the URL that takes you to the app's sign-in page when you're on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess.
> [!WARNING]
> For this type of single sign-on, the internal URL must use `https` and can't use `http`. Also, there is a constraint when configuring an application that no two apps should have the same internal URL as this allows App Proxy to maintain distinction between applications.
To publish your own on-premises application:
1. **Translate URL in Headers**: Choose **No**.
> [!NOTE]
- > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener youΓÇÖve configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners).
+ > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener you've configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners).
1. Select **Add**. The overview page for the new application appears.
In addition to the external URL, an authorize endpoint of Azure Active Directory
Finally, set up your on-premises application so that users have read access and other applications have read/write access:
-1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the APIs for Windows Azure Active Directory.
+1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the permissions for Microsoft Graph.
![Shows the Request API permissions page](./media/application-proxy-configure-single-sign-on-with-ping-access/required-permissions.png)
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
-+ Last updated 08/29/2022
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
-+ Last updated 08/29/2022
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps Extended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md
-+ Last updated 08/29/2022
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md
-+ Last updated 08/29/2022
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md
-+ Last updated 08/29/2022
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md
-+ Last updated 08/29/2022
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md
-+ Last updated 08/29/2022
active-directory Govern Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/govern-service-accounts.md
Last updated 02/09/2023 -+
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-considerations.md
Last updated 04/19/2023 -+ # Common considerations for multi-tenant user management
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-scenarios.md
Last updated 04/19/2023 -+
active-directory Service Accounts Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-managed-identities.md
Last updated 02/07/2023 -+
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md
Last updated 02/08/2023 -+
active-directory Certificate Based Authentication Federation Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-android.md
description: Learn about the supported scenarios and the requirements for config
+ Last updated 09/30/2022
active-directory Certificate Based Authentication Federation Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-get-started.md
description: Learn how to configure certificate-based authentication with federa
+ Last updated 05/04/2022
- # Get started with certificate-based authentication in Azure Active Directory with federation
active-directory Certificate Based Authentication Federation Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-ios.md
description: Learn about the supported scenarios and the requirements for config
+ Last updated 09/30/2022
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
-+ # Certificate user IDs
active-directory Concept Password Ban Bad Combined Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md
description: Learn about the combined password policy and check for weak passwor
+ Last updated 04/02/2023
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
tags: azuread+
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
-+ # Password policies and account restrictions in Azure Active Directory
active-directory Concepts Azure Multi Factor Authentication Prompts Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md
description: Learn about the recommended configuration for reauthentication prom
+ Previously updated : 03/28/2023 Last updated : 08/15/2023
Azure Active Directory (Azure AD) has multiple settings that determine how often
The Azure AD default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
-It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken).
+It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions by using Microsoft Graph PowerShell](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession).
This article details recommended configurations and how different settings work and interact with each other.
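For the session-revocation link above, the Microsoft Graph PowerShell call looks roughly like this; a sketch assuming the Microsoft.Graph.Users.Actions module, with a placeholder UPN and a scope name that should be verified against the cmdlet's documentation:

```powershell
# Sketch: explicitly revoke a user's sessions; refresh tokens are invalidated
# and the user must sign in again. The UPN and scope below are assumptions.
Connect-MgGraph -Scopes "User.RevokeSessions.All"
Revoke-MgUserSignInSession -UserId "alex@contoso.com"
```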
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
-+ # How to configure Azure AD certificate-based authentication
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
description: Step-by-step guidance to migrate from MFA Server on-premises to Azu
+ Last updated 01/29/2023
active-directory How To Migrate Mfa Server To Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-with-federation.md
Title: Migrate to Azure AD MFA with federations
description: Step-by-step guidance to move from MFA Server on-premises to Azure AD MFA with federation + Last updated 05/23/2023
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
description: Enable passwordless sign-in to Azure AD using Microsoft Authenticat
+ Last updated 05/16/2023
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md
description: Learn how to enable users to sign in to Azure Active Directory with
+ Last updated 06/01/2023
- # Sign-in to Azure AD with email as an alternate login ID (Preview)
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
Title: Deployment considerations for Azure AD Multi-Factor Authentication
description: Learn about deployment considerations and strategy for successful implementation of Azure AD Multi-Factor Authentication + Last updated 03/06/2023
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent
| **REQUEST_FORMAT_ERROR** <br> Radius Request missing mandatory Radius userName\Identifier attribute. Verify that NPS is receiving RADIUS requests | This error usually reflects an installation issue. The NPS extension must be installed in NPS servers that can receive RADIUS requests. NPS servers that are installed as dependencies for services like RDG and RRAS don't receive radius requests. NPS Extension does not work when installed over such installations and errors out since it cannot read the details from the authentication request. |
| **REQUEST_MISSING_CODE** | Make sure that the password encryption protocol between the NPS and NAS servers supports the secondary authentication method that you're using. **PAP** supports all the authentication methods of Azure AD MFA in the cloud: phone call, one-way text message, mobile app notification, and mobile app verification code. **CHAPV2** and **EAP** support phone call and mobile app notification. |
| **USERNAME_CANONICALIZATION_ERROR** | Verify that the user is present in your on-premises Active Directory instance, and that the NPS Service has permissions to access the directory. If you are using cross-forest trusts, [contact support](#contact-microsoft-support) for further help. |
+| **Challenge requested in Authentication Ext for User** | Organizations using a RADIUS protocol other than PAP will observe user VPN authorization failing with these events appearing in the AuthZOptCh event log of the NPS Extension server. You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications. For further help, please check [Number matching using NPS Extension](how-to-mfa-number-match.md#nps-extension). |
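For the fallback mentioned in that last row, the NPS extension reads overrides from the registry. A hedged sketch; the `AzureMfa` key is the extension's usual settings location, but confirm the value name against the linked number-matching guidance before applying:

```powershell
# Sketch: on the NPS Extension server, fall back to Approve/Deny push
# notifications by disabling OTP-backed number matching. Run elevated.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureMfa' `
    -Name 'OVERRIDE_NUMBER_MATCHING_WITH_OTP' -Value 'FALSE'
Restart-Service -Name IAS   # IAS is the NPS service; restart picks up the change
```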
### Alternate login ID errors
active-directory Howto Mfa Nps Extension Rdg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md
description: Integrate your Remote Desktop Gateway infrastructure with Azure AD
+ Last updated 01/29/2023
active-directory Howto Mfa Nps Extension Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md
description: Integrate your VPN infrastructure with Azure AD MFA by using the Ne
+ Last updated 01/29/2023
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
-+ # Integrate your existing Network Policy Server (NPS) infrastructure with Azure AD Multi-Factor Authentication
active-directory Howto Mfa Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md
-+ # Use the sign-ins report to review Azure AD Multi-Factor Authentication events
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
-+ # Enable per-user Azure AD Multi-Factor Authentication to secure sign-in events
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
description: Troubleshoot Azure AD Multi-Factor Authentication and self-service
+ Last updated 01/29/2023
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
-+ # Pre-populate user authentication contact information for Azure Active Directory self-service password reset (SSPR)
active-directory V1 Permissions Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-permissions-consent.md
Last updated 09/24/2018 -+
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
description: What are cloud apps, actions, and authentication context in an Azur
+ Last updated 06/27/2023
active-directory Concept Continuous Access Evaluation Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md
Last updated 07/22/2022
-+
The following steps detail how an admin can verify sign-in activity in the sign-in logs:
- [Register an application with Azure AD and create a service principal](../develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal)
- [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md)
- [Sample application using continuous access evaluation](https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae)
+- [Securing workload identities with Azure AD Identity Protection](../identity-protection/concept-workload-identity-risk.md)
- [What is continuous access evaluation?](../conditional-access/concept-continuous-access-evaluation.md)
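For the verification step those links support, service principal sign-ins can be pulled from the beta sign-in logs. A sketch that assumes the `signInEventTypes` filter is available in your tenant and that the caller holds *AuditLog.Read.All*:

```powershell
# Sketch: list recent service principal (workload identity) sign-ins.
Connect-MgGraph -Scopes "AuditLog.Read.All"
$uri = 'https://graph.microsoft.com/beta/auditLogs/signIns?' +
       '$filter=signInEventTypes/any(t: t eq ''servicePrincipal'')&$top=10'
$signIns = Invoke-MgGraphRequest -Method GET -Uri $uri
$signIns.value | ForEach-Object { '{0}  {1}' -f $_.createdDateTime, $_.appDisplayName }
```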
active-directory Howto Conditional Access Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md
description: Using the Azure AD Conditional Access APIs and PowerShell to manage
+ Last updated 09/10/2020
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
description: Customize Azure AD authentication session configuration including u
+ Last updated 07/18/2023
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Last updated 05/22/2023 -+
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
Previously updated : 05/23/2023 Last updated : 08/16/2023
# Configure a custom claim provider token issuance event (preview)
-This article describes how to configure and setup a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token.
+This article describes how to configure and set up a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token.
This how-to guide demonstrates the token issuance start event with a REST API running in Azure Functions and a sample OpenID Connect application. Before you start, take a look at the following video, which demonstrates how to configure an Azure AD custom claims provider with a Function App:
In this step, you configure a custom authentication extension, which will be use
# [Microsoft Graph](#tab/microsoft-graph)
-Create an Application Registration to authenticate your custom authentication extension to your Azure Function.
+Register an application to authenticate your custom authentication extension to your Azure Function.
-1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/applications`
-1. Select **Request Body** and paste the following JSON:
+1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. The account must have the privileges to create and manage an application registration in the tenant.
+2. Run the following request.
- ```json
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/applications
+ Content-type: application/json
+
{
- "displayName": "authenticationeventsAPI"
+ "displayName": "authenticationeventsAPI"
    }
    ```
-1. Select **Run Query** to submit the request.
-
-1. Copy the **Application ID** value (*appId*) from the response. You need this value later, which is referred to as the `{authenticationeventsAPI_AppId}`. Also get the object ID of the app (*ID*), which is referred to as `{authenticationeventsAPI_ObjectId}` from the response.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/csharp/v1/tutorial-application-basics-create-app-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/go/v1/tutorial-application-basics-create-app-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/javascript/v1/tutorial-application-basics-create-app-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ Snippet not available.
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/powershell/v1/tutorial-application-basics-create-app-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/python/v1/tutorial-application-basics-create-app-python-snippets.md)]
+
+
-Create a service principal in the tenant for the authenticationeventsAPI app registration:
+3. From the response, record the value of **id** and **appId** of the newly created app registration. These values will be referenced in this article as `{authenticationeventsAPI_ObjectId}` and `{authenticationeventsAPI_AppId}` respectively.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals`
-1. Select **Request Body** and paste the following JSON:
+Create a service principal in the tenant for the authenticationeventsAPI app registration.
- ```json
- {
- "appId": "{authenticationeventsAPI_AppId}"
- }
- ```
+Still in Graph Explorer, run the following request. Replace `{authenticationeventsAPI_AppId}` with the value of **appId** that you recorded from the previous step.
-1. Select **Run Query** to submit the request.
+```http
+POST https://graph.microsoft.com/v1.0/servicePrincipals
+Content-type: application/json
+
+{
+ "appId": "{authenticationeventsAPI_AppId}"
+}
+```
### Set the App ID URI, access token version, and required resource access

Update the newly created application to set the application ID URI value, the access token version, and the required resource access.
-1. Set the HTTP method to **PATCH**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}`
-1. Select **Request Body** and paste the following JSON:
+In Graph Explorer, run the following request.
+ - Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier.
+ - Set the `{authenticationeventsAPI_AppId}` value with the **appId** that you recorded earlier.
+ - An example value is `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as you'll use it later in this article in place of `{functionApp_IdentifierUri}`.
- Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier.
-
- Set the `{authenticationeventsAPI_AppId}` value with the App ID generated from the app registration created in the previous step.
-
- An example value would be `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as it is used in following steps and is referenced as `{functionApp_IdentifierUri}`.
-
- ```json
+```http
+PATCH https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}
+Content-type: application/json
+
+{
+"identifierUris": [
+ "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}"
+],
+"api": {
+ "requestedAccessTokenVersion": 2,
+ "acceptMappedClaims": null,
+ "knownClientApplications": [],
+ "oauth2PermissionScopes": [],
+ "preAuthorizedApplications": []
+},
+"requiredResourceAccess": [
{
- "identifierUris": [
- "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}"
- ],
- "api": {
- "requestedAccessTokenVersion": 2,
- "acceptMappedClaims": null,
- "knownClientApplications": [],
- "oauth2PermissionScopes": [],
- "preAuthorizedApplications": []
- },
- "requiredResourceAccess": [
+ "resourceAppId": "00000003-0000-0000-c000-000000000000",
+ "resourceAccess": [
{
- "resourceAppId": "00000003-0000-0000-c000-000000000000",
- "resourceAccess": [
- {
- "id": "214e810f-fda8-4fd7-a475-29461495eb00",
- "type": "Role"
- }
- ]
+ "id": "214e810f-fda8-4fd7-a475-29461495eb00",
+ "type": "Role"
        }
      ]
    }
- ```
-
-1. Select **Run Query** to submit the request.
+]
+}
+```
### Register a custom authentication extension
-Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the App Registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`.
+Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the app registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/beta/identity/customAuthenticationExtensions`
-1. Select **Request Body** and paste the following JSON:
+1. In Graph Explorer, run the following request. Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step.
+ - You'll need the *CustomAuthenticationExtension.ReadWrite.All* delegated permission.
- Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step.
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/beta/identity/customAuthenticationExtensions
+ Content-type: application/json
- ```json
{ "@odata.type": "#microsoft.graph.onTokenIssuanceStartCustomExtension", "displayName": "onTokenIssuanceStartCustomExtension",
Next, you register the custom authentication extension. You register the custom
    ]
    }
    ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
-1. Select **Run Query** to submit the request.
+
-Record the ID value of the created custom claims provider object. The ID is needed in a later step and is referred to as the `{customExtensionObjectId}`.
+1. Record the **id** value of the created custom claims provider object. You'll use the value later in this tutorial in place of `{customExtensionObjectId}`.
### 2.2 Grant admin consent
-After your custom authentication extension is created, you'll be taken to the **Overview** tab of the new custom authentication extension.
+After your custom authentication extension is created, open the **Overview** tab of the new custom authentication extension.
From the **Overview** page, select the **Grant permission** button to give admin consent to the registered app, which allows the custom authentication extension to authenticate to your API. The custom authentication extension uses `client_credentials` to authenticate to the Azure Function App using the `Receive custom authentication extension HTTP requests` permission.
The following screenshot shows how to register the *My Test application*.
### 3.1 Get the application ID
-In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps.
+In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps. In Microsoft Graph, it's referenced by the **appId** property.
:::image type="content" border="false"source="media/custom-extension-get-started/get-the-test-application-id.png" alt-text="Screenshot that shows how to copy the application ID.":::
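If you'd rather script this step than copy from the portal, a sketch that filters on the walkthrough's display name (adjust to your own app; assumes the Microsoft.Graph module):

```powershell
# Sketch: look up the appId of "My Test application" via Microsoft Graph.
Connect-MgGraph -Scopes "Application.Read.All"
$uri = 'https://graph.microsoft.com/v1.0/applications?' +
       '$filter=displayName eq ''My Test application''&$select=appId,displayName'
$apps = Invoke-MgGraphRequest -Method GET -Uri $uri
$apps.value[0].appId   # use as {App_to_enrich_ID}
```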
Next, assign the attributes from the custom claims provider, which should be iss
# [Microsoft Graph](#tab/microsoft-graph)
-First create an event listener to trigger a custom authentication extension using the token issuance start event:
-
-1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/beta/identity/authenticationEventListeners`
-1. Select **Request Body** and paste the following JSON:
+First create an event listener to trigger a custom authentication extension for the *My Test application* using the token issuance start event.
- Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier.
+1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in.
+1. Run the following request. Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier.
+ - You'll need the *EventListener.ReadWrite.All* delegated permission.
- ```json
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/beta/identity/authenticationEventListeners
+ Content-type: application/json
+
{ "@odata.type": "#microsoft.graph.onTokenIssuanceStartListener", "conditions": {
First create an event listener to trigger a custom authentication extension usin
    }
    ```
-1. Select **Run Query** to submit the request.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)]
+
+
+
-Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider:
+Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/policies/claimsmappingpolicies`
-1. Select **Request Body** and paste the following JSON:
+1. Still in Graph Explorer, run the following request. You'll need the *Policy.ReadWrite.ApplicationConfiguration* delegated permission.
++
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies
+ Content-type: application/json
- ```json
{ "definition": [ "{\"ClaimsMappingPolicy\":{\"Version\":1,\"IncludeBasicClaimSet\":\"true\",\"ClaimsSchema\":[{\"Source\":\"CustomClaimsProvider\",\"ID\":\"DateOfBirth\",\"JwtClaimType\":\"dob\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CustomRoles\",\"JwtClaimType\":\"my_roles\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CorrelationId\",\"JwtClaimType\":\"correlationId\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"ApiVersion\",\"JwtClaimType\":\"apiVersion \"},{\"Value\":\"tokenaug_V2\",\"JwtClaimType\":\"policy_version\"}]}}"
Next, create the claims mapping policy, which describes which claims can be issu
"isOrganizationDefault": false } ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-claimsmappingpolicies-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-claimsmappingpolicies-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-claimsmappingpolicies-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-claimsmappingpolicies-php-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-claimsmappingpolicies-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-claimsmappingpolicies-python-snippets.md)]
+
+
-1. Record the `ID` generated in the response, later it's referred to as `{claims_mapping_policy_ID}`.
-1. Select **Run Query** to submit the request.
+2. Record the `ID` generated in the response, later it's referred to as `{claims_mapping_policy_ID}`.
-Get the `servicePrincipal` objectId:
+Get the service principal object ID:
-1. Set the HTTP method to **GET**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')/claimsMappingPolicies/$ref`. Replace `{App_to_enrich_ID}` with *My Test Application* App ID.
-1. Record the `id` value, later it's referred to as `{test_App_Service_Principal_ObjectId}`.
+1. Run the following request in Graph Explorer. Replace `{App_to_enrich_ID}` with the **appId** of *My Test Application*.
-Assign the claims mapping policy to the `servicePrincipal` of *My Test Application*:
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')
+ ```
+
+Record the value of **id**; it's referred to later as `{test_App_Service_Principal_ObjectId}`.
-1. Set the HTTP method to **POST**.
-1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref`
-1. Select **Request Body** and paste the following JSON:
+Assign the claims mapping policy to the service principal of *My Test Application*.
+
+1. Run the following request in Graph Explorer. You'll need the *Policy.ReadWrite.ApplicationConfiguration* and *Application.ReadWrite.All* delegated permissions.
+
+ # [HTTP](#tab/http)
+ ```http
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref
+ Content-type: application/json
- ```json
{ "@odata.id": "https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies/{claims_mapping_policy_ID}" } ```
-1. Select **Run Query** to submit the request.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-serviceprincipal-csharp-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-serviceprincipal-go-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-serviceprincipal-javascript-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-serviceprincipal-php-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-serviceprincipal-powershell-snippets.md)]
+
+ # [Python](#tab/python)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-serviceprincipal-python-snippets.md)]
+
+
active-directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-applications-are-added.md
Last updated 10/26/2022 -+
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
Previously updated : 10/27/2022 Last updated : 08/16/2023
When you connect a Windows device with Azure AD using an Azure AD join, Azure AD
- The Azure AD joined device local administrator role
- The user performing the Azure AD join
-By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device.
+By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to users with the Global Administrator role, you can also enable users that have been *only* assigned the Azure AD Joined Device Local Administrator role to manage a device.
-## Manage the global administrators role
+## Manage the Global Administrator role
-To view and update the membership of the Global Administrator role, see:
+To view and update the membership of the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) role, see:
- [View all members of an administrator role in Azure Active Directory](../roles/manage-roles-portal.md) - [Assign a user to administrator roles in Azure Active Directory](../fundamentals/how-subscriptions-associated-directory.md)
-## Manage the device administrator role
+## Manage the Azure AD Joined Device Local Administrator role
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)]
-In the Azure portal, you can manage the device administrator role from **Device settings**.
+In the Azure portal, you can manage the [Azure AD Joined Device Local Administrator](/azure/active-directory/roles/permissions-reference#azure-ad-joined-device-local-administrator) role from **Device settings**.
1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator.
1. Browse to **Azure Active Directory** > **Devices** > **Device settings**.
1. Select **Manage Additional local administrators on all Azure AD joined devices**.
1. Select **Add assignments** then choose the other administrators you want to add and select **Add**.
-To modify the device administrator role, configure **Additional local administrators on all Azure AD joined devices**.
+To modify the Azure AD Joined Device Local Administrator role, configure **Additional local administrators on all Azure AD joined devices**.
> [!NOTE]
> This option requires Azure AD Premium licenses.
-Device administrators are assigned to all Azure AD joined devices. You canΓÇÖt scope device administrators to a specific set of devices. Updating the device administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed into, the privilege elevation takes place when *both* the below actions happen:
+Azure AD Joined Device Local Administrators are assigned to all Azure AD joined devices. You canΓÇÖt scope this role to a specific set of devices. Updating the Azure AD Joined Device Local Administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed into, the privilege elevation takes place when *both* the below actions happen:
- Up to 4 hours have passed for Azure AD to issue a new Primary Refresh Token with the appropriate privileges.
- User signs out and signs back in, not lock/unlock, to refresh their profile.
-Users won't be listed in the local administrator group, the permissions are received through the Primary Refresh Token.
+Users aren't directly listed in the local administrator group; the permissions are received through the Primary Refresh Token.
> [!NOTE]
> The above actions are not applicable to users who have not signed in to the relevant device previously. In this case, the administrator privileges are applied immediately after their first sign-in to the device.

## Manage administrator privileges using Azure AD groups (preview)
-Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you the granularity to configure distinct administrators for different groups of devices.
+Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you with the granularity to configure distinct administrators for different groups of devices.
Organizations can use Intune to manage these policies using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10) or [Account protection policy](/mem/intune/protect/endpoint-security-account-protection-policy). A few considerations for using this policy:
-- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID is defined by the property `securityIdentifier` in the API response.
+- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID equates to the property `securityIdentifier` in the API response.
- Administrator privileges using this policy are evaluated only for the following well-known groups on a Windows 10 or newer device: Administrators, Users, Guests, Power Users, Remote Desktop Users, and Remote Management Users.
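For example, here's a minimal sketch of looking up a group's SID with the Microsoft Graph PowerShell SDK; the group name and the returned SID are hypothetical placeholders:

```powershell
# Requires the Microsoft.Graph PowerShell module; Group.Read.All is enough to read groups.
Connect-MgGraph -Scopes "Group.Read.All"

# Look up the group and read the securityIdentifier property from the Graph response.
$group = Get-MgGroup -Filter "displayName eq 'Device Admins - Marketing'"
$group.SecurityIdentifier   # for example: S-1-12-1-1111111111-2222222222-3333333333-4444444444
```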
By default, Azure AD adds the user performing the Azure AD join to the administr
- [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) - Windows Autopilot provides you with an option to prevent the primary user performing the join from becoming a local administrator by [creating an Autopilot profile](/intune/enrollment-autopilot#create-an-autopilot-deployment-profile).-- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an auto-created user. Users signing in after a device has been joined aren't added to the administrators group.
+- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join performed as part of a bulk enrollment happens in the context of an autocreated user. Users signing in after a device has been joined aren't added to the administrators group.
## Manually elevate a user on a device
Additionally, you can add users using the command prompt:
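A sketch of what that typically looks like, run from an elevated prompt on the device (the UPN is a placeholder):

```powershell
# Adds an Azure AD user to the local Administrators group; the same command works from cmd or PowerShell.
net localgroup administrators /add "AzureAD\user@contoso.com"
```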
## Considerations -- You can only assign role based groups to the device administrator role.-- Device administrators are assigned to all Azure AD Joined devices. They can't be scoped to a specific set of devices.
+- You can only assign role-based groups to the Azure AD Joined Device Local Administrator role.
+- The Azure AD Joined Device Local Administrator role is assigned to all Azure AD Joined devices. This role can't be scoped to a specific set of devices.
- Local administrator rights on Windows devices aren't applicable to [Azure AD B2B guest users](../external-identities/what-is-b2b.md).-- When you remove users from the device administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take upto 4 hours.
+- When you remove users from the Azure AD Joined Device Local Administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take up to 4 hours.
## Next steps
active-directory How To Hybrid Join Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-verify.md
description: Verify configurations for hybrid Azure AD joined devices
+ Last updated 02/27/2023
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
--+ # Log in to a Windows virtual machine in Azure by using Azure AD including passwordless
active-directory Hybrid Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-manual.md
description: Learn how to manually configure hybrid Azure Active Directory join
+ Last updated 07/05/2022
active-directory Manage Stale Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md
description: Learn how to remove stale devices from your database of registered
+ Last updated 09/27/2022
-#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can I can cleanup my device registration data.
-
+#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can clean up my device registration data.
# How To: Manage stale devices in Azure AD
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Last updated 10/03/2022 -+
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
Last updated 03/02/2022 -+
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
Last updated 06/23/2022 -+
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Last updated 06/23/2022 --+
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Last updated 06/28/2023 -+
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md
Last updated 06/23/2022 -+
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
Last updated 06/24/2022 -+
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
Last updated 06/24/2022 -+
active-directory Groups Restore Deleted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md
Last updated 06/24/2022 -+ # Restore a deleted Microsoft 365 group in Azure Active Directory
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Last updated 06/12/2023 -+
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Last updated 06/24/2022 -+ # Azure Active Directory cmdlets for configuring group settings
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
Last updated 06/24/2022 -+ # Azure Active Directory version 2 cmdlets for group management
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md
Last updated 01/09/2023 -+
active-directory Licensing Ps Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md
+ Last updated 12/02/2020
active-directory Linkedin Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md
Last updated 06/24/2022 -+
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
-+
active-directory Users Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md
-+
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
-+
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
Last updated 06/24/2022-+
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md
description: Learn how to enforce multi-factor authentication policies for Azure
+ Last updated 04/17/2023
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Last updated 07/31/2023
--
-# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
+
+# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users
active-directory Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md
Last updated 04/06/2023
-+ # Customer intent: As a tenant administrator, I want to bulk-invite external users to an organization from email addresses that I've stored in a .csv file.
active-directory How To Facebook Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-facebook-federation-customers.md
Last updated 06/20/2023 --+ #Customer intent: As a dev, devops, or it admin, I want to
At this point, the Facebook identity provider has been set up in your customer t
## Next steps - [Add Google as an identity provider](how-to-google-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
active-directory How To Google Federation Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-google-federation-customers.md
Last updated 05/24/2023 --+ #Customer intent: As a dev, devops, or it admin, I want to
At this point, the Google identity provider has been set up in your Azure AD, bu
## Next steps - [Add Facebook as an identity provider](how-to-facebook-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)
active-directory Customize Invitation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md
description: Azure Active Directory B2B collaboration supports your cross-compan
+ Last updated 12/02/2022
-# Customer intent: As a tenant administrator, I want to customize the invitation process with the API.
+# Customer intent: As a tenant administrator, I want to customize the invitation process with the API.
# Azure Active Directory B2B collaboration API and customization
Check out the invitation API reference in [https://developer.microsoft.com/graph
- [What is Azure AD B2B collaboration?](what-is-b2b.md) - [Add and invite guest users](add-users-administrator.md) - [The elements of the B2B collaboration invitation email](invitation-email-elements.md)-
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Last updated 03/15/2023
-+
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
description: Learn how to enable Active Directory B2B external collaboration and
+ Last updated 10/24/2022
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
Last updated 01/20/2023
-+ -
-# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login.
+# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login.
# Add Facebook as an identity provider for External Identities
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Last updated 01/20/2023
-+
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md
description: If you have internal user accounts for partners, distributors, supp
+ Last updated 07/27/2023
- # Customer intent: As a tenant administrator, I want to know how to invite internal users to B2B collaboration.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Last updated 05/23/2023
tags: active-directory -+
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Last updated 05/18/2023
-+ -
-# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption.
+# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption.
# Properties of an Azure Active Directory B2B collaboration user
active-directory Custom Security Attributes Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-add.md
+ Last updated 06/29/2023
active-directory Custom Security Attributes Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md
+ Last updated 06/29/2023
active-directory New Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md
+ Last updated 07/11/2023 - # Customer intent: As a new or existing customer, I want to learn more about the new name for Azure Active Directory (Azure AD) and understand the impact the name change may have on other products, new or existing license(s), what I need to do, and where I can learn more about Microsoft Entra products.
active-directory Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md
description: Get protected from common identity threats using Azure AD security
+ Last updated 07/31/2023
active-directory What Is Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md
Last updated 01/27/2023 --+ # What's deprecated in Azure Active Directory?
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Last updated 7/18/2023 -+
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
Last updated 05/31/2023 -+
active-directory Tutorial Prepare User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-user-accounts.md
Last updated 08/02/2023 -+ # Preparing user accounts for Lifecycle workflows tutorials
active-directory Custom Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/custom-attribute-mapping.md
-+ Last updated 01/12/2023
active-directory How To Inbound Synch Ms Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-inbound-synch-ms-graph.md
+ Last updated 01/11/2023
active-directory Migrate Azure Ad Connect To Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/migrate-azure-ad-connect-to-cloud-sync.md
+ Last updated 01/17/2023
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/reference-powershell.md
+ Last updated 01/17/2023
active-directory How To Bypassdirsyncoverrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-bypassdirsyncoverrides.md
+
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-emergency-ad-fs-certificate-rotation.md
+ Last updated 01/26/2023
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-o365-certs.md
ms.assetid: 543b7dc1-ccc9-407f-85a1-a9944c0ba1be
na+ Last updated 01/26/2023
active-directory How To Connect Fed Saml Idp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-saml-idp.md
description: This document describes using a SAML 2.0 compliant Idp for single s
-+ na
active-directory How To Connect Fed Single Adfs Multitenant Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-single-adfs-multitenant-federation.md
ms.assetid:
na+ Last updated 01/26/2023
active-directory How To Connect Install Existing Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-existing-tenant.md
description: This topic describes how to use Connect when you have an existing A
+ Last updated 01/26/2023
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-multiple-domains.md
ms.assetid: 5595fb2f-2131-4304-8a31-c52559128ea4
na+ Last updated 01/26/2023
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md
ms.assetid: 91b88fda-bca6-49a8-898f-8d906a661f07
na+ Last updated 05/02/2023
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md
ms.assetid: 05f16c3e-9d23-45dc-afca-3d0fa9dbf501 + Last updated 05/18/2023
active-directory How To Connect Sync Change The Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-change-the-configuration.md
ms.assetid: 7b9df836-e8a5-4228-97da-2faec9238b31 + Last updated 01/26/2023
active-directory How To Connect Sync Feature Preferreddatalocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-feature-preferreddatalocation.md
description: Describes how to put your Microsoft 365 user resources close to the
+ Last updated 01/26/2023
active-directory How To Connect Syncservice Duplicate Attribute Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-duplicate-attribute-resiliency.md
ms.assetid: 537a92b7-7a84-4c89-88b0-9bce0eacd931
na+ Last updated 01/26/2023
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md
ms.assetid: 213aab20-0a61-434a-9545-c4637628da81
na+ Last updated 01/26/2023
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/migrate-from-federation-to-cloud-authentication.md
description: This article has information about moving your hybrid identity envi
+ Last updated 04/04/2023
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-accounts-permissions.md
na+ Last updated 01/19/2023
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsynctools.md
-+ # Azure AD Connect: ADSyncTools PowerShell Reference
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history-archive.md
Last updated 01/19/2023
-+ # Azure AD Connect: Version release history archive
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md
Last updated 7/6/2022 -+
To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to
- We have enabled Auto Upgrade for tenants with custom synchronization rules. Note that deleted (not disabled) default rules will be re-created and enabled upon Auto Upgrade. - We have added Microsoft Azure AD Connect Agent Updater service to the install. This new service will be used for future auto upgrades. - We have removed the Synchronization Service WebService Connector Config program from the install.
+ - Default sync rule "In from AD – User Common" was updated to flow the employeeType attribute.
### Bug Fixes - We have made improvements to accessibility.
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-connectivity.md
-+ # Troubleshoot Azure AD Connect connectivity issues
active-directory Tshoot Connect Object Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-object-not-syncing.md
ms.assetid:
na+ Last updated 01/19/2023
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sso.md
ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7 + Last updated 01/19/2023
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sync-errors.md
Last updated 01/19/2023 -+
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md
Last updated 07/12/2023 -+ # Azure Active Directory PowerShell examples for Application Management
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Last updated 11/22/2022 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want to manage user assignment for an app in Azure Active Directory using PowerShell
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Last updated 03/16/2023 -+ zone_pivot_groups: home-realm-discovery- #customer intent: As and admin, I want to configure Home Realm Discovery for Azure AD authentication for federated users.
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Last updated 3/28/2023 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want configure permission classifications for applications in Azure AD
active-directory Configure Risk Based Step Up Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
Last updated 11/17/2021 --+ #customer intent: As an admin, I want to configure risk-based step-up consent. # Configure risk-based step-up consent using PowerShell
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md
Last updated 09/06/2022 --+ #customer intent: As an admin, I want to configure group owner consent to apps accessing group data using Azure AD
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
Last updated 06/21/2023
zone_pivot_groups: enterprise-apps-all-+ #Customer intent: As an administrator of an Azure AD tenant, I want to delete an enterprise application.
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Last updated 2/23/2023 -+ zone_pivot_groups: enterprise-apps-all- #customer intent: As an admin, I want to disable user sign-in for an application so that no user can sign in to it in Azure Active Directory. # Disable user sign-in for an application
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
zone_pivot_groups: enterprise-apps-all--+ #customer intent: As an admin, I want to hide an enterprise application from user's experience so that it is not listed in the user's Active directory access portals or Microsoft 365 launchers
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Last updated 01/02/2023 --+ # Home Realm Discovery for an application
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Last updated 06/15/2023
-+ # Configure Azure Active Directory SAML token encryption
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
zone_pivot_groups: enterprise-apps-all --+ #customer intent: As an admin, I want to review permissions granted to applications so that I can restrict suspicious or over privileged applications.- # Review permissions granted to enterprise applications
active-directory Migrate Okta Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation.md
Last updated 05/23/2023 -+ # Tutorial: Migrate Okta federation to Azure Active Directory-managed authentication
active-directory Migrate Okta Sync Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md
Last updated 05/23/2023 -+ # Tutorial: Migrate Okta sync provisioning to Azure AD Connect synchronization
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Last updated 03/16/2023
zone_pivot_groups: home-realm-discovery--+ #customer intent: As an admin, I want to disable auto-acceleration to federated IDP during sign in using Home Realm Discovery policy # Disable auto-acceleration sign-in
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
Last updated 06/21/2023 -+ zone_pivot_groups: enterprise-apps-minus-portal #Customer intent: As an administrator of an Azure AD tenant, I want to restore a soft deleted enterprise application.
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
-+ Last updated 07/12/2023
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
Last updated 05/12/2022 -+ # Assign a managed identity access to an application role using PowerShell
active-directory Qs Configure Powershell Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md
Last updated 05/10/2023 -+ # Configure managed identities for Azure resources on an Azure VM using PowerShell
active-directory Concept Pim For Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md
na Previously updated : 6/7/2023 Last updated : 8/15/2023
One group can be an eligible member of another group, even if one of those group
If a user is an active member of Group A, and Group A is an eligible member of Group B, the user can activate their membership in Group B. The activation applies only to the user who requested it; it doesn't mean that the entire Group A becomes an active member of Group B.
+## Privileged Identity Management and app provisioning (Public Preview)
+
+If the group is configured for [app provisioning](../app-provisioning/index.yml), activation of group membership will trigger provisioning of the group membership (and of the user account itself if it wasn't provisioned previously) to the application using the SCIM protocol.
+
+In Public Preview, this functionality triggers provisioning right after group membership is activated in PIM.
+Provisioning configuration depends on the application. Generally, we recommend having at least two groups assigned to the application. Depending on the number of roles in your application, you may choose to define additional "privileged groups":
++
+|Group|Purpose|Members|Group membership|Role assigned in the application|
+|--|--|--|--|--|
+|All users group|Ensure that all users that need access to the application are constantly provisioned to the application.|All users that need to access the application.|Active|None, or low-privileged role|
+|Privileged group|Provide just-in-time access to a privileged role in the application.|Users that need just-in-time access to the privileged role in the application.|Eligible|Privileged role|
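To illustrate the activation step that triggers this provisioning, a member can self-activate eligible membership in the privileged group through Microsoft Graph. This is a sketch with hypothetical IDs and an assumed eight-hour duration, not the only way to activate:

```powershell
# Requires the Microsoft.Graph PowerShell module and consent to the PIM-for-Groups permission.
Connect-MgGraph -Scopes "PrivilegedAssignmentSchedule.ReadWrite.AzureADGroup"

# Self-activate eligible membership in the privileged group (IDs are placeholders).
$body = @{
    accessId      = "member"
    principalId   = "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"   # requesting user's object ID
    groupId       = "cccccccc-3333-4444-5555-dddddddddddd"   # the privileged group
    action        = "selfActivate"
    scheduleInfo  = @{
        startDateTime = (Get-Date).ToUniversalTime().ToString("o")
        expiration    = @{ type = "afterDuration"; duration = "PT8H" }
    }
    justification = "Just-in-time access to the application's privileged role"
}
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/privilegedAccess/group/assignmentScheduleRequests" `
    -Body $body
```

Once the activation completes, the Public Preview behavior described above provisions the new membership to the application over SCIM.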
+ ## Next steps - [Bring groups into Privileged Identity Management](groups-discover-groups.md)
active-directory Pim Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md
We support all Microsoft 365 roles in the Azure AD Roles and Administrators port
> [!NOTE] > - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security & Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.
-> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-device-administrator-role).
+> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role).
## Next steps
active-directory Concept Diagnostic Settings Logs Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md
++
+ Title: Logs available for streaming to endpoints from Azure Active Directory
+description: Learn about the Azure Active Directory logs available for streaming to an endpoint for storage, analysis, or monitoring.
+++++++ Last updated : 08/09/2023+++++
+# Learn about the identity logs you can stream to an endpoint
+
+Using Diagnostic settings in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term retention and data insights. You select the logs you want to route, then select the endpoint.
+
+This article describes the logs that you can route to an endpoint from Azure AD Diagnostic settings.
+
+## Prerequisites
+
+Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a Diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Azure AD tenant.
+
+To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles.
+
+- [Send logs to a Log Analytics workspace to integrate with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md)
+- [Archive logs to a storage account](howto-archive-logs-to-storage-account.md)
+- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md)
+- [Send to a partner solution](../../partner-solutions/overview.md)
+
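As a sketch of what routing amounts to programmatically, a Diagnostic setting can be created against the tenant-level `microsoft.aadiam` provider with the Az PowerShell module; the setting name, workspace resource ID, and selected categories below are assumptions for illustration:

```powershell
# Requires the Az PowerShell module; sign in with an account that can manage the tenant's diagnostic settings.
Connect-AzAccount

# Route AuditLogs and SignInLogs to a Log Analytics workspace (resource ID is a placeholder).
$setting = @{
    properties = @{
        workspaceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
        logs        = @(
            @{ category = "AuditLogs";  enabled = $true },
            @{ category = "SignInLogs"; enabled = $true }
        )
    }
} | ConvertTo-Json -Depth 5

# Azure AD diagnostic settings live under the tenant-level microsoft.aadiam provider.
Invoke-AzRestMethod -Method PUT `
    -Path "/providers/microsoft.aadiam/diagnosticSettings/SendToLogAnalytics?api-version=2017-04-01-preview" `
    -Payload $setting
```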
+## Activity log options
+
+The following logs can be sent to an endpoint. Some logs may be in public preview but still visible in the portal.
+
+### Audit logs
+
+The `AuditLogs` report captures changes to applications, groups, users, and licenses in your Azure AD tenant. Once you've routed your audit logs, you can filter or analyze by date/time, the service that logged the event, and who made the change. For more information, see [Audit logs](concept-audit-logs.md).
+
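For instance, once `AuditLogs` are flowing to a Log Analytics workspace, you can filter them with a query; a minimal sketch assuming the Az.OperationalInsights module (the workspace GUID is a placeholder):

```powershell
# Pull yesterday's audit events from the routed AuditLogs table (workspace GUID is a placeholder).
$query = @"
AuditLogs
| where TimeGenerated > ago(1d)
| project TimeGenerated, LoggedByService, OperationName, Result
| order by TimeGenerated desc
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "00000000-0000-0000-0000-000000000000" -Query $query
$result.Results | Format-Table
```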
+### Sign-in logs
+
+The `SignInLogs` option sends the interactive sign-in logs, which are generated by your users signing in. Sign-in logs are generated by users providing their username and password on an Azure AD sign-in screen or passing an MFA challenge. For more information, see [Interactive user sign-ins](concept-all-sign-ins.md#interactive-user-sign-ins).
+
+### Non-interactive sign-in logs
+
+The `NonInteractiveUserSignInLogs` are sign-ins done on behalf of a user, such as by a client app. The device or client uses a token or code to authenticate or access a resource on behalf of a user. For more information, see [Non-interactive user sign-ins](concept-all-sign-ins.md#non-interactive-user-sign-ins).
+
+### Service principal sign-in logs
+
+If you need to review sign-in activity for apps or service principals, the `ServicePrincipalSignInLogs` may be a good option. In these scenarios, certificates or client secrets are used for authentication. For more information, see [Service principal sign-ins](concept-all-sign-ins.md#service-principal-sign-ins).
+
+### Managed identity sign-in logs
+
+The `ManagedIdentitySignInLogs` provide similar insights as the service principal sign-in logs, but for managed identities, where Azure manages the secrets. For more information, see [Managed identity sign-ins](concept-all-sign-ins.md#managed-identity-for-azure-resources-sign-ins).
+
+### Provisioning logs
+
+If your organization provisions users through a third-party application such as Workday or ServiceNow, you may want to export the `ProvisioningLogs` reports. For more information, see [Provisioning logs](concept-provisioning-logs.md).
+
+### AD FS sign-in logs
+
+Sign-in activity for Active Directory Federation Services (AD FS) applications is captured in the Usage and insights report. You can export the `ADFSSignInLogs` report to monitor sign-in activity for AD FS applications. For more information, see [AD FS sign-in logs](concept-usage-insights-report.md#ad-fs-application-activity).
+
+### Risky users
+
+The `RiskyUsers` logs identify users who are at risk based on their sign-in activity. This report is part of Azure AD Identity Protection and uses sign-in data from Azure AD. For more information, see [What is Azure AD Identity Protection?](../identity-protection/overview-identity-protection.md).
+
+### User risk events
+
+The `UserRiskEvents` logs are part of Azure AD Identity Protection. These logs capture details about risky sign-in events. For more information, see [How to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins).
+
+### Risky service principals
+
+The `RiskyServicePrincipals` logs provide information about service principals that Azure AD Identity Protection detected as risky. Service principal risk represents the probability that an identity or account is compromised. These risks are calculated asynchronously using data and patterns from Microsoft's internal and external threat intelligence sources. These sources may include security researchers, law enforcement professionals, and security teams at Microsoft. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md).
+
+### Service principal risk events
+
+The `ServicePrincipalRiskEvents` logs provide details about the risky sign-in events for service principals. These logs may include any identified suspicious events related to the service principal accounts. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md).
+
+### Enriched Microsoft 365 audit logs
+
+The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you can enable for Microsoft Entra Internet Access. Selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access to secure access to your Microsoft 365 traffic *and* you enabled the enriched logs. For more information, see [How to use the Global Secure Access enriched Microsoft 365 logs](../../global-secure-access/how-to-view-enriched-logs.md).
+
+### Microsoft Graph activity logs
+
+The `MicrosoftGraphActivityLogs` logs are associated with a feature that is still in preview. The logs are visible in Azure AD, but selecting this option won't add new logs to your workspace unless your organization was included in the preview.
+
+### Network access traffic logs
+
+The `NetworkAccessTrafficLogs` logs are associated with Microsoft Entra Internet Access and Microsoft Entra Private Access. The logs are visible in Azure AD, but selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access and Microsoft Entra Private Access to secure access to your corporate resources. For more information, see [What is Global Secure Access?](../../global-secure-access/overview-what-is-global-secure-access.md).
+
+## Next steps
+
+- [Learn about the sign-ins logs](concept-all-sign-ins.md)
+- [Explore how to access the activity logs](howto-access-activity-logs.md)
active-directory Concept Log Monitoring Integration Options Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-log-monitoring-integration-options-considerations.md
+
+ Title: Azure Active Directory activity log integration options and considerations
+description: Introduction to the options and considerations for integrating Azure Active Directory activity logs with storage and analysis tools.
+++++++ Last updated : 08/09/2023+++
+# Azure AD activity log integrations
+
+Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs.
+
+With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered.
+
+## Supported reports
+
+The following logs can be integrated with one of many endpoints:
+
+* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant.
+* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors.
+* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications.
+* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) help you monitor changes in user risk level and remediation activity.
+* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor users' risk detections and analyze trends in risk activity detected in your organization.
+
+## Integration options
+
+To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories:
+
+- Troubleshooting
+- Long-term storage
+- Analysis and monitoring
+
+### Troubleshooting
+
+If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed.
+
+If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options.
+
+### Long-term storage
+
+If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal if you don't plan on querying that data often.
+
+If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options.
+
+### Analysis and monitoring
+
+If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring.
+
+If you have a third-party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools.
+
+If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs. With this integration, you can query your activity logs with Log Analytics. In addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub.
+
+## Cost considerations
+
+There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day.
+
+Because the size and cost for sending logs to an endpoint are difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for a day or two. With this snapshot, you can get an accurate prediction for your expected costs. You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day.
+
+Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles:
+
+- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md)
+- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md)
+- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md)
+
+Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md).
+
+## Estimate your costs
+
+To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint.
+
+The following factors could affect costs for your organization:
+
+- Audit log events use around 2 KB of data storage
+- Sign-in log events use on average 11.5 KB of data storage
+- A tenant of about 100,000 users could incur about 1.5 million events per day
+- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame
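Putting those figures together gives a rough upper bound; a back-of-envelope sketch using the numbers above, not a substitute for the pricing calculator:

```powershell
# Rough daily sign-in log volume for a ~100,000-user tenant, using the averages above.
$eventsPerDay   = 1.5e6    # ~1.5 million events per day
$avgEventSizeKB = 11.5     # average sign-in event size in KB

$gbPerDay = ($eventsPerDay * $avgEventSizeKB) / 1MB   # KB / 1,048,576 = GB
"{0:N1} GB/day" -f $gbPerDay                          # roughly 16-17 GB/day before any filtering
```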
+
+### Daily log size
+
+To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start:
+
+- 1000 records
+- For large tenants, 15 minutes of sign-ins
+- For small to medium tenants, 1 hour of sign-ins
+
+You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. Adjust your sample size and when you capture the sample accordingly.
+
+With the data sample captured, multiply accordingly to find out how large the file would be for one day.
+
+### Estimate the daily cost
+
+To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase.
+
+To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out. Having a dedicated resource group makes it easy to view the cost analysis and then delete it when you're done.
+
+With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts.
+
+![Screenshot of a cost analysis breakdown as a pie chart.](media/concept-activity-logs-azure-monitor/cost-analysis-breakdown.png)
+
+Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost.
+
+## Calculate estimated costs
+
+From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products.
+
+- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/)
+- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/)
+- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/)
+- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/)
+
+Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices.
+
+![Screenshot of the Azure pricing calculator, with 8 GB/Day used as an example.](media/concept-activity-logs-azure-monitor/azure-pricing-calculator-values.png)
+
+## Next steps
+
+* [Create a storage account](../../storage/common/storage-account-create.md)
+* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md)
+* [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md)
+* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Howto Access Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md
Title: Access activity logs in Azure AD
-description: Learn how to choose the right method for accessing the activity logs in Azure AD.
+description: Learn how to choose the right method for accessing the activity logs in Azure Active Directory.
Previously updated : 07/26/2023 Last updated : 08/08/2023 --
-# How To: Access activity logs in Azure AD
+# How to access activity logs in Azure AD
-The data in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with various options to access your activity log data. As an IT administrator, you need to understand the intended uses cases for these options, so that you can select the right access method for your scenario.
+The data collected in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with several options to access your activity log data. As an IT administrator, you need to understand the intended use cases for these options, so that you can select the right access method for your scenario.
You can access Azure AD activity logs and reports using the following methods:
Each of these methods provides you with capabilities that may align with certain
## Prerequisites
-The required roles and licenses may vary based on the report. Global Administrator can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
+The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview).
| Log / Report | Roles | Licenses | |--|--|--|
The required roles and licenses may vary based on the report. Global Administrat
| Usage and insights | Security Reader<br>Reports Reader<br> Security Administrator | Premium P1/P2 | | Identity Protection* | Security Administrator<br>Security Operator<br>Security Reader<br>Global Reader | Azure AD Free/Microsoft 365 Apps<br>Azure AD Premium P1/P2 |
-*The level of access and capabilities for Identity Protection varies with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements).
+*The level of access and capabilities for Identity Protection vary with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements).
Audit logs are available for features that you've licensed. To access the sign-ins logs using the Microsoft Graph API, your tenant must have an Azure AD Premium license associated with it.
The SIEM tools you can integrate with your event hub can provide analysis and mo
## Access logs with Microsoft Graph API
-The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance.
+The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app.
### Recommended uses
We recommend manually downloading and storing your activity logs if you have bud
Use the following basic steps to archive or download your activity logs.
-### Archive activity logs to a storage account
+#### Archive activity logs to a storage account
1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. 1. Create a storage account.
active-directory Howto Archive Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-archive-logs-to-storage-account.md
+
+ Title: How to archive activity logs to a storage account
+description: Learn how to archive Azure Active Directory logs to a storage account
+++++++ Last updated : 08/09/2023+++
+# Customer intent: As an IT administrator, I want to learn how to archive Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period.
++
+# How to archive Azure AD logs to an Azure storage account
+
+If you need to store Azure Active Directory (Azure AD) activity logs for longer than the [default retention period](reference-reports-data-retention.md), you can archive your logs to a storage account.
+
+## Prerequisites
+
+To use this feature, you need:
+
+* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+* A user who's a *Security Administrator* or *Global Administrator* for the Azure AD tenant.
+
+## Archive logs to an Azure storage account
++
+6. Under **Destination Details** select the **Archive to a storage account** check box.
+
+7. Select the appropriate **Subscription** and **Storage account** from the menus.
+
+ ![Diagnostics settings](media/howto-archive-logs-to-storage-account/diagnostic-settings-storage.png)
+
+8. After the categories have been selected, in the **Retention days** field, type the number of days of retention you need for your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
+
+ > [!NOTE]
+ > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md).
+
+9. Select **Save** to save the setting.
+
+10. Close the window to return to the Diagnostic settings pane.
+
+## Next steps
+
+- [Learn about other ways to access activity logs](howto-access-activity-logs.md)
+- [Manually download activity logs](howto-download-logs.md)
+- [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md)
+- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md)
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
# Prerequisites to access the Azure Active Directory reporting API
-The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs.
+The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. The Microsoft Graph API is **not** designed for pulling large amounts of activity data; doing so can lead to pagination and performance issues.
This article describes how to enable Microsoft Graph to access the Azure AD reporting APIs in the Azure portal and through PowerShell.
To get access to the reporting data through the API, you need to have one of the following roles:
- Security Administrator
- Global Administrator
-In order to access the sign-in reports for a tenant, an Azure AD tenant must have associated Azure AD Premium P1 or P2 license. Alternatively if the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement.
+In order to access the sign-in reports for a tenant, an Azure AD tenant must have an associated Azure AD Premium P1 or P2 license. If the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement.
Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Azure AD reporting API, you must sign in to the [Azure portal](https://portal.azure.com) in one of the required roles.
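
After registering an app and granting it the reporting permissions, a minimal hedged sketch of calling the reporting API with the Microsoft Graph PowerShell SDK follows (delegated sign-in is assumed; the `$top` query value is just an example):

```powershell
# Sign in with the permissions the reporting endpoints require.
Connect-MgGraph -Scopes "AuditLog.Read.All","Directory.Read.All"

# Call the sign-ins reporting endpoint; single quotes keep $top literal.
Invoke-MgGraphRequest -Method GET `
    -Uri 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=5'
```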
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md
-# How to: Download logs in Azure Active Directory
+# How to download logs in Azure Active Directory
The Azure Active Directory (Azure AD) portal gives you access to three types of activity logs:
active-directory Howto Integrate Activity Logs With Arcsight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md
- Title: Integrate logs with ArcSight using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor
------- Previously updated : 10/31/2022------
-# Integrate Azure Active Directory logs with ArcSight using Azure Monitor
-
-[Micro Focus ArcSight](https://software.microfocus.com/products/siem-security-information-event-management/overview) is a security information and event management (SIEM) solution that helps you detect and respond to security threats in your platform. You can now route Azure Active Directory (Azure AD) logs to ArcSight using Azure Monitor using the ArcSight connector for Azure AD. This feature allows you to monitor your tenant for security compromise using ArcSight.
-
-In this article, you learn how to route Azure AD logs to ArcSight using Azure Monitor.
-
-## Prerequisites
-
-To use this feature, you need:
-* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer.
-
-Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
-
-## Integrate Azure AD logs with ArcSight
-
-1. First, complete the steps in the **Prerequisites** section of the configuration guide. This section includes the following steps:
- * Set user permissions in Azure, to ensure there's a user with the **owner** role to deploy and configure the connector.
- * Open ports on the server with Syslog NG Daemon SmartConnector, so it's accessible from Azure.
- * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector.
-
-2. Follow the steps in the **Deploying the Connector** section of configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties and run the deployment script from the extracted folder.
-
-3. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following:
- * The requisite Azure functions are created in your Azure subscription.
- * The Azure AD logs are streamed to the correct destination.
- * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps.
- * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format.
-
-4. Finally, complete the post-deployment steps in the **Post-Deployment Configurations** section of the configuration guide. This section explains how to perform additional configuration if you're on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
-
-5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
-
-## Next steps
-
-[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292)
active-directory Howto Integrate Activity Logs With Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md
+
+ Title: Integrate Azure Active Directory logs with Azure Monitor logs
+description: Learn how to integrate Azure Active Directory logs with Azure Monitor logs for querying and analysis.
+++++++ Last updated : 08/08/2023++++
+# Integrate Azure AD logs with Azure Monitor logs
+
+Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so your sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data.
+
+This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor.
+
+Use the integration of Azure AD activity logs and Azure Monitor to perform the following tasks:
+
+- Compare your Azure AD sign-in logs against security logs published by Microsoft Defender for Cloud.
+- Troubleshoot performance bottlenecks on your application's sign-in page by correlating application performance data from Azure Application Insights.
+- Analyze the Identity Protection risky users and risk detections logs to detect threats in your environment.
+- Identify sign-ins from applications still using the Active Directory Authentication Library (ADAL) for authentication. [Learn about the ADAL end-of-support plan.](../develop/msal-migration.md)
+
+> [!NOTE]
+> Integrating Azure Active Directory logs with Azure Monitor automatically enables the Azure Active Directory data connector within Microsoft Sentinel.
+
+## How do I access it?
+
+To use this feature, you need:
+
+* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+* An Azure AD Premium P1 or P2 tenant.
+* **Global Administrator** or **Security Administrator** access for the Azure AD tenant.
+* A **Log Analytics workspace** in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+* Permission to access data in a Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions.
+
+## Create a Log Analytics workspace
+
+A Log Analytics workspace allows you to collect data based on a variety of requirements, such as the geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
+
+Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
+
+## Send logs to Azure Monitor
+
+Follow the steps below to send logs from Azure Active Directory to Azure Monitor logs.
++
+6. Under **Destination Details**, select the **Send to Log Analytics workspace** check box.
+
+7. Select the appropriate **Subscription** and **Log Analytics workspace** from the menus.
+
+8. Select the **Save** button.
+
+ ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-azure-monitor-logs/diagnostic-settings-log-analytics-workspace.png)
+
+If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs.
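+
+Once logs are flowing, you can also query the workspace programmatically. A minimal sketch, assuming the Az.OperationalInsights module; the workspace GUID is a placeholder:
+
+```powershell
+# KQL: count yesterday's sign-ins per application.
+$query = @"
+SigninLogs
+| where TimeGenerated > ago(1d)
+| summarize count() by AppDisplayName
+| order by count_ desc
+"@
+
+(Invoke-AzOperationalInsightsQuery -WorkspaceId "<WORKSPACE_GUID>" -Query $query).Results
+```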
+
+## Next steps
+
+* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
+* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md)
+* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
active-directory Howto Integrate Activity Logs With Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md
- Title: Integrate Azure Active Directory logs with Azure Monitor | Microsoft Docs
-description: Learn how to integrate Azure Active Directory logs with Azure Monitor
------- Previously updated : 06/26/2023-----
-# How to integrate Azure AD logs with Azure Monitor logs
-
-Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. Integrating Azure AD logs with Azure Monitor logs enables rich visualizations, monitoring, and alerting on the connected data.
-
-This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor Logs.
-
-## Roles and licenses
-
-To integrate Azure AD logs with Azure Monitor, you need the following roles and licenses:
-
-* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-
-* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD.
-
-* **Security Administrator access for the Azure AD tenant:** This role is required to set up the Diagnostics settings.
-
-* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions.
-
-## Integrate logs with Azure Monitor logs
-
-To send Azure AD logs to Azure Monitor Logs you must first have a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md). Then you can set up the Diagnostics settings in Azure AD to send your activity logs to that workspace.
-
-### Create a Log Analytics workspace
-
-A Log Analytics workspace allows you to collect data based on a variety of requirements, such as the geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).
-
-Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
-
-### Set up Diagnostics settings
-
-Once you have a Log Analytics workspace created, follow the steps below to send logs from Azure Active Directory to that workspace.
--
-Follow the steps below to send logs from Azure Active Directory to Azure Monitor. Looking for how to set up Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator**.
-
-1. Go to **Azure Active Directory** > **Diagnostic settings**. You can also select **Export Settings** from the Audit logs or Sign-in logs.
-
-1. Select **+ Add diagnostic setting** to create a new integration or select **Edit setting** to change an existing integration.
-
-1. Enter a **Diagnostic setting name**. If you're editing an existing integration, you can't change the name.
-
-1. Any or all of the following logs can be sent to the Log Analytics workspace. Some logs may be in public preview but still visible in the portal.
- * `AuditLogs`
- * `SignInLogs`
- * `NonInteractiveUserSignInLogs`
- * `ServicePrincipalSignInLogs`
- * `ManagedIdentitySignInLogs`
- * `ProvisioningLogs`
- * `ADFSSignInLogs` Active Directory Federation Services (ADFS)
- * `RiskyServicePrincipals`
- * `RiskyUsers`
- * `ServicePrincipalRiskEvents`
- * `UserRiskEvents`
-
-1. The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview.
- * `EnrichedOffice365AuditLogs`
- * `MicrosoftGraphActivityLogs`
- * `NetworkAccessTrafficLogs`
-
-1. In the **Destination details**, select **Send to Log Analytics workspace** and choose the appropriate details from the menus that appear.
- * You can also send logs to any or all of the following destinations. Additional fields appear, depending on your selection.
- * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear.
- * **Stream to an event hub:** Select the appropriate details from the menus that appear.
- * **Send to partner solution:** Select the appropriate details from the menus that appear.
-
-1. Select **Save** to save the setting.
-
- ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-log-analytics/Configure.png)
-
-If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs.
-
-> [!NOTE]
-> Integrating Azure Active Directory logs with Azure Monitor will automatically enable the Azure Active Directory data connector within Microsoft Sentinel.
-
-## Next steps
-
-* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
-* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md)
-* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md)
active-directory Howto Integrate Activity Logs With Splunk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md
- Title: Integrate Splunk using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with Splunk using Azure Monitor.
------- Previously updated : 10/31/2022------
-# How to: Integrate Azure Active Directory logs with Splunk using Azure Monitor
-
-In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with Splunk by using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with Splunk.
-
-## Prerequisites
-
-To use this feature, you need:
-- An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-- The [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details).
-## Integrate Azure Active Directory logs
-
-1. Open your Splunk instance, and select **Data Summary**.
-
- ![The "Data Summary" button](./media/howto-integrate-activity-logs-with-splunk/DataSummary.png)
-
-2. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub**
-
- ![The Data Summary Sourcetypes tab](./media/howto-integrate-activity-logs-with-splunk/source-eventhub.png)
-
-Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure:
-
- ![Activity logs](./media/howto-integrate-activity-logs-with-splunk/activity-logs.png)
-
-> [!NOTE]
-> If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub.
->
-
-## Next steps
-
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Howto Integrate Activity Logs With Sumologic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md
- Title: Stream logs to SumoLogic using Azure Monitor
-description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor.
------- Previously updated : 10/31/2022------
-# Integrate Azure Active Directory logs with SumoLogic using Azure Monitor
-
-In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with SumoLogic using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with SumoLogic.
-
-## Prerequisites
-
-To use this feature, you need:
-* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-* A SumoLogic single sign-on enabled subscription.
-
-## Steps to integrate Azure AD logs with SumoLogic
-
-1. First, [stream the Azure AD logs to an Azure event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
-2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
-3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
-
- ![Dashboard](./media/howto-integrate-activity-logs-with-sumologic/overview-dashboard.png)
-
-## Next steps
-
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
+ Last updated 05/02/2023
active-directory Howto Stream Logs To Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-stream-logs-to-event-hub.md
+
+ Title: Stream Azure Active Directory logs to an event hub
+description: Learn how to stream Azure Active Directory logs to an event hub for SIEM tool integration.
+++++++ Last updated : 08/08/2023+++
+# How to stream activity logs to an event hub
+
+Your Azure Active Directory (Azure AD) tenant produces large amounts of data every second. Sign-in activity and logs of changes made in your tenant add up to a lot of data that can be hard to analyze. Integrating with Security Information and Event Management (SIEM) tools can help you gain insights into your environment.
+
+This article shows how you can stream your logs to an event hub, to integrate with one of several SIEM tools.
+
+## Prerequisites
+
+To stream logs to a SIEM tool, you first need to create an **Azure event hub**.
+
+Once you have an event hub that contains Azure AD activity logs, you can set up the SIEM tool integration using the **Azure AD Diagnostics Settings**.
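+
+If you don't have an event hub yet, the following is a minimal sketch of creating one with the Az.EventHub module; the resource names and location are placeholders, and parameter names can vary between module versions:
+
+```powershell
+# Namespace and hub that the diagnostic setting will stream logs to.
+New-AzEventHubNamespace -ResourceGroupName "rg-logs" -Name "ehns-aad-logs" -Location "eastus"
+New-AzEventHub -ResourceGroupName "rg-logs" -NamespaceName "ehns-aad-logs" -Name "aad-activity-logs"
+```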
+
+## Stream logs to an event hub
++
+6. Select the **Stream to an event hub** check box.
+
+7. Select the Azure subscription, Event Hubs namespace, and optional event hub where you want to route the logs.
+
+The subscription and Event Hubs namespace must both be associated with the Azure AD tenant from which you're streaming the logs.
+
+Once you have the Azure event hub ready, navigate to the SIEM tool you want to integrate with the activity logs. You'll finish the process in the SIEM tool.
+
+We currently support Splunk, SumoLogic, and ArcSight. Select a tab below to get started, and refer to the tool's documentation for full integration details.
+
+# [Splunk](#tab/splunk)
+
+To use this feature, you need the [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details).
+
+### Integrate Azure AD logs with Splunk
+
+1. Open your Splunk instance and select **Data Summary**.
+
+ ![The "Data Summary" button](./media/howto-stream-logs-to-event-hub/datasummary.png)
+
+1. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub**.
+
+ ![The Data Summary Sourcetypes tab](./media/howto-stream-logs-to-event-hub/source-eventhub.png)
+
+Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure:
+
+ ![Activity logs](./media/howto-stream-logs-to-event-hub/activity-logs.png)
+
+If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub.
+
+# [SumoLogic](#tab/SumoLogic)
+
+To use this feature, you need a SumoLogic single sign-on enabled subscription.
+
+### Integrate Azure AD logs with SumoLogic
+
+1. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory).
+
+1. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment.
+
+ ![Dashboard](./media/howto-stream-logs-to-event-hub/overview-dashboard.png)
+
+# [ArcSight](#tab/ArcSight)
+
+To use this feature, you need a configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer.
+
+Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://software.microfocus.com/products/siem-security-information-event-management/overview). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor.
+
+### Integrate Azure AD logs with ArcSight
+
+1. Complete the steps in the **Prerequisites** section of the ArcSight configuration guide. This section includes the following steps:
+ * Set user permissions in Azure to ensure there's a user with the **owner** role to deploy and configure the connector.
+ * Open ports on the server with Syslog NG Daemon SmartConnector so it's accessible from Azure.
+ * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector.
+
+1. Follow the steps in the **Deploying the Connector** section of the ArcSight configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties and run the deployment script from the extracted folder.
+
+1. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following:
+ * The requisite Azure functions are created in your Azure subscription.
+ * The Azure AD logs are streamed to the correct destination.
+ * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps.
+ * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format.
+
+1. Complete the post-deployment steps in the **Post-Deployment Configurations** section of the ArcSight configuration guide. This section explains how to perform additional configuration if you're on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account.
+
+1. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle.
+++
+## Activity log integration options and considerations
+
+If your current SIEM isn't supported in Azure Monitor diagnostics yet, you can set up **custom tooling** by using the Event Hubs API. To learn more, see [Getting started receiving messages from an event hub](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md).
+
+**IBM QRadar** is another option for integrating with Azure AD activity logs. The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site.
+
+Some sign-in categories contain large amounts of log data, depending on your tenant's configuration. In general, the non-interactive user sign-ins and service principal sign-ins can be 5 to 10 times larger than the interactive user sign-ins.
+
+## Next steps
+
+- [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
+- [Use Microsoft Graph to access Azure AD activity logs](quickstart-access-log-with-graph-api.md)
active-directory Howto Use Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md
Previously updated : 07/14/2023 Last updated : 08/10/2023
-# How to: Use Azure AD recommendations
+# How to use Azure Active Directory Recommendations
The Azure Active Directory (Azure AD) recommendations feature provides you with personalized insights with actionable guidance to:
active-directory Howto Use Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-workbooks.md
+
+ Title: Azure Monitor workbooks for Azure Active Directory
+description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports.
+++++++ Last updated : 08/10/2023++++
+# How to use Azure Active Directory Workbooks
+
+Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks; however, workbooks for Azure Active Directory (Azure AD) cover only the identity management scenarios that are associated with Azure AD.
+
+When using workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch.
+
+- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks.
+- **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant.
+
+## Prerequisites
+
+To use Azure Workbooks for Azure AD, you need:
+
+- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md)
+- A Log Analytics workspace *and* access to that workspace
+- The appropriate roles for Azure Monitor *and* Azure AD
+
+### Log Analytics workspace
+
+You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. A combination of factors determines access to Log Analytics workspaces: you need the right roles for the workspace *and* for the resources sending the data.
+
+For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md).
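+
+As a hedged sketch (placeholder names, assuming the Az.OperationalInsights module), a workspace can be created from PowerShell:
+
+```powershell
+New-AzResourceGroup -Name "rg-aad-monitoring" -Location "eastus"
+New-AzOperationalInsightsWorkspace -ResourceGroupName "rg-aad-monitoring" `
+    -Name "law-aad-workbooks" -Location "eastus" -Sku "PerGB2018"
+```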
+
+### Azure Monitor roles
+
+Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access.
+
+- **View**:
+ - Monitoring Reader
+ - Log Analytics Reader
+
+- **View and modify settings**:
+ - Monitoring Contributor
+ - Log Analytics Contributor
+
+For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader).
+
+For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor).
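+
+For example, a hedged sketch of granting one of these roles with Azure RBAC (the user and scope below are placeholders):
+
+```powershell
+# Grant read access to the workspace's resource group.
+New-AzRoleAssignment -SignInName "analyst@contoso.com" `
+    -RoleDefinitionName "Log Analytics Reader" `
+    -Scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/rg-aad-monitoring"
+```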
+
+### Azure AD roles
+
+Read-only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace.
+
+- **Read**:
+ - Reports Reader
+ - Security Reader
+ - Global Reader
+
+- **Update**:
+ - Security Administrator
+
+For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md).
+
+## How to access Azure Workbooks for Azure AD
++
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**.
+ - **Workbooks**: All workbooks created in your tenant
+ - **Public Templates**: Prebuilt workbooks for common or high priority scenarios
+ - **My Templates**: Templates you've created
+1. Select a report or template from the list. Workbooks may take a few moments to populate.
+ - Search for a template by name.
+    - Select **Browse across galleries** to view templates that aren't specific to Azure AD.
+
+ ![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png)
+
+## Create a new workbook
+
+Workbooks can be created from scratch or from a template. When creating a new workbook, you can add elements as you go or use the **Advanced Editor** option to paste in the JSON representation of a workbook, copied from the [workbooks GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json).
+
+**To create a new workbook from scratch**:
+1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**.
+1. Select **+ New**.
+1. Select an element from the **+ Add** menu.
+
+ For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md).
+
+ ![Screenshot of the Azure Workbooks +Add menu options.](./media/howto-use-azure-monitor-workbooks/create-new-workbook-elements.png)
+
+**To create a new workbook from a template**:
+1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**.
+1. Select a workbook template from the Gallery.
+1. Select **Edit** from the top of the page.
+ - Each element of the workbook has its own **Edit** button.
+ - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md)
+
+1. Select the **Edit** button for any element. Make your changes and select **Done editing**.
+ ![Screenshot of a workbook in Edit mode, with the Edit and Done Editing buttons highlighted.](./media/howto-use-azure-monitor-workbooks/edit-buttons.png)
+1. When you're done editing the workbook, select **Save As** to save your workbook with a new name.
+1. In the **Save As** window:
+ - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**.
+ - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md).
+1. Select the **Apply** button.
+
+## Next steps
+
+* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).
+* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md).
active-directory Overview Monitoring Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring-health.md
+
+ Title: What is Azure Active Directory monitoring and health?
+description: Provides a general overview of Azure Active Directory monitoring and health.
+++++++ Last updated : 08/15/2023+++++
+# What is Azure Active Directory monitoring and health?
+
+The features of Azure Active Directory (Azure AD) Monitoring and health provide a comprehensive view of identity related activity in your environment. This data enables you to:
+
+- Determine how your users utilize your apps and services.
+- Detect potential risks affecting the health of your environment.
+- Troubleshoot issues preventing your users from getting their work done.
+
+Sign-in and audit logs comprise the activity logs behind many Azure AD reports, which can be used to analyze, monitor, and troubleshoot activity in your tenant. Routing your activity logs to an analysis and monitoring solution provides greater insights into your tenant's health and security.
+
+This article describes the types of activity logs available in Azure AD, the reports that use the logs, and the monitoring services available to help you analyze the data.
+
+## Identity activity logs
+
+Activity logs help you understand the behavior of users in your organization. There are three types of activity logs in Azure AD:
+
+- [**Audit logs**](concept-audit-logs.md) include the history of every task performed in your tenant.
+
+- [**Sign-in logs**](concept-all-sign-ins.md) capture the sign-in attempts of your users and client applications.
+
+- [**Provisioning logs**](concept-provisioning-logs.md) provide information about users provisioned in your tenant through a third-party service.
+
+The activity logs can be viewed in the Azure portal or using the Microsoft Graph API. Activity logs can also be routed to various endpoints for storage or analysis. To learn about all of the options for viewing the activity logs, see [How to access activity logs](howto-access-activity-logs.md).
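+
+As a hedged illustration of the Microsoft Graph option (assuming the Microsoft.Graph.Reports module; the output columns are examples):
+
+```powershell
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+
+# Retrieve the 20 most recent directory audit events.
+Get-MgAuditLogDirectoryAudit -Top 20 |
+    Select-Object ActivityDateTime, ActivityDisplayName, Result
+```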
+
+### Audit logs
+
+Audit logs provide you with records of system activities for compliance. This data enables you to address common scenarios such as:
+
+- Someone in my tenant got access to an admin group. Who gave them access?
+- I want to know the list of users signing into a specific app because I recently onboarded the app and want to know if it's doing well.
+- I want to know how many password resets are happening in my tenant.
+
+### Sign-in logs
+
+The sign-ins logs enable you to find answers to questions such as:
+
+- What is the sign-in pattern of a user?
+- How many users have signed in over a week?
+- What's the status of these sign-ins?
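+
+For example, a user's sign-in pattern can be sketched with the Microsoft Graph PowerShell SDK (a hedged example; the user name is a placeholder):
+
+```powershell
+Connect-MgGraph -Scopes "AuditLog.Read.All"
+
+# List a single user's most recent interactive sign-ins.
+Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'user@contoso.com'" -Top 10 |
+    Select-Object CreatedDateTime, AppDisplayName, IPAddress
+```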
+
+### Provisioning logs
+
+You can use the provisioning logs to find answers to questions like:
+
+- What groups were successfully created in ServiceNow?
+- What users were successfully removed from Adobe?
+- What users from Workday were successfully created in Active Directory?
+
+## Identity reports
+
+Reviewing the data in the Azure AD activity logs can provide helpful information for IT administrators. To streamline the process of reviewing data on key scenarios, we've created several reports on common scenarios that use the activity logs.
+
+- [Identity Protection](../identity-protection/overview-identity-protection.md) uses sign-in data to create reports on risky users and sign-in activities.
+- Activity related to your applications, such as service principal and app credential activity, is used to create reports in [Usage and insights](concept-usage-insights-report.md).
+- [Azure AD workbooks](overview-workbooks.md) provide a customizable way to view and analyze the activity logs.
+- [Monitor the status of Azure AD recommendations to improve your tenant's security.](overview-recommendations.md)
+
+## Identity monitoring and tenant health
+
+Reviewing Azure AD activity logs is the first step in maintaining and improving the health and security of your tenant. You need to analyze the data, monitor on risky scenarios, and determine where you can make improvements. Azure AD monitoring provides the necessary tools to help you make informed decisions.
+
+Monitoring Azure AD activity logs requires routing the log data to a monitoring and analysis solution. Endpoints include Azure Monitor logs, Microsoft Sentinel, or a third-party Security Information and Event Management (SIEM) tool.
+
+- [Stream logs to an event hub to integrate with third-party SIEM tools.](howto-stream-logs-to-event-hub.md)
+- [Integrate logs with Azure Monitor logs.](howto-integrate-activity-logs-with-log-analytics.md)
+- [Analyze logs with Azure Monitor logs and Log Analytics.](howto-analyze-activity-logs-log-analytics.md)
++
+For an overview of how to access, store, and analyze activity logs, see [How to access activity logs](howto-access-activity-logs.md).
++
+## Next steps
+
+- [Learn about the sign-ins logs](concept-all-sign-ins.md)
+- [Learn about the audit logs](concept-audit-logs.md)
+- [Use Microsoft Graph to access activity logs](quickstart-access-log-with-graph-api.md)
+- [Integrate activity logs with SIEM tools](howto-stream-logs-to-event-hub.md)
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md
-- Title: What is Azure Active Directory monitoring?
-description: Provides a general overview of Azure Active Directory monitoring.
------- Previously updated : 11/01/2022---
-# Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant.
---
-# What is Azure Active Directory monitoring?
-
-With Azure Active Directory (Azure AD) monitoring, you can now route your Azure AD activity logs to different endpoints. You can then either retain it for long-term use or integrate it with third-party Security Information and Event Management (SIEM) tools to gain insights into your environment.
-
-Currently, you can route the logs to:
-- An Azure storage account.
-- An Azure event hub, so you can integrate with your Splunk and SumoLogic instances.
-- An Azure Log Analytics workspace, where you can analyze the data, create dashboards, and alert on specific events.
-**Prerequisite role**: Global Administrator
-
-> [!VIDEO https://www.youtube.com/embed/syT-9KNfug8]
--
-## Licensing and prerequisites for Azure AD reporting and monitoring
-
-You'll need an Azure AD premium license to access the Azure AD sign-in logs.
-
-For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
-
-To deploy Azure AD monitoring and reporting you'll need a user who is a global administrator or security administrator for the Azure AD tenant.
-
-Depending on the final destination of your log data, you'll need one of the following:
-
-* An Azure storage account that you have ListKeys permissions for. We recommend that you use a general storage account and not a Blob storage account. For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage).
-
-* An Azure Event Hubs namespace to integrate with third-party SIEM solutions.
-
-* An Azure Log Analytics workspace to send logs to Azure Monitor logs.
-
-## Diagnostic settings configuration
-
-To configure monitoring settings for Azure AD activity logs, first sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. From here, you can access the diagnostic settings configuration page in two ways:
-
-* Select **Diagnostic settings** from the **Monitoring** section.
-
- ![Diagnostics settings](./media/overview-monitoring/diagnostic-settings.png)
-
-* Select **Audit Logs** or **Sign-ins**, then select **Export settings**.
-
- ![Export settings](./media/overview-monitoring/export-settings.png)
--
-## Route logs to storage account
-
-By routing logs to an Azure storage account, you can retain it for longer than the default retention period outlined in our [retention policies](reference-reports-data-retention.md). Learn how to [route data to your storage account](quickstart-azure-monitor-route-logs-to-storage-account.md).
-
-## Stream logs to event hub
-
-Routing logs to an Azure event hub allows you to integrate with third-party SIEM tools like Sumologic and Splunk. This integration allows you to combine Azure AD activity log data with other data managed by your SIEM, to provide richer insights into your environment. Learn how to [stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md).
-
-## Send logs to Azure Monitor logs
-
-[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) is a solution that consolidates monitoring data from different sources and provides a query language and analytics engine that gives you insights into the operation of your applications and resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor and alert on collected data. Learn how to [send data to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md).
-
-You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign-ins and audit events. Learn how to [install and use log analytics views for Azure AD activity logs](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md).
-
-## Next steps
-
-* [Activity logs in Azure Monitor](concept-activity-logs-azure-monitor.md)
-* [Stream logs to event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md)
-* [Send logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md)
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md
-- Title: What are Azure Active Directory reports?
-description: Provides a general overview of Azure Active Directory reports.
------- Previously updated : 02/03/2023---
-# Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment.
---
-# What are Azure Active Directory reports?
-
-Azure Active Directory (Azure AD) reports provide a comprehensive view of activity in your environment. The provided data enables you to:
-- Determine how your apps and services are utilized by your users
-- Detect potential risks affecting the health of your environment
-- Troubleshoot issues preventing your users from getting their work done
-## Activity reports
-
-Activity reports help you understand the behavior of users in your organization. There are two types of activity reports in Azure AD:
-- **Audit logs** - The [audit logs activity report](concept-audit-logs.md) provides you with access to the history of every task performed in your tenant.
-- **Sign-ins** - With the [sign-ins activity report](concept-sign-ins.md), you can determine who has performed the tasks reported by the audit logs report.
-> [!VIDEO https://www.youtube.com/embed/ACVpH6C_NL8]
-
-### Audit logs report
-
-The [audit logs report](concept-audit-logs.md) provides you with records of system activities for compliance. This data enables you to address common scenarios such as:
-- Someone in my tenant got access to an admin group. Who gave them access?
-- I want to know the list of users signing into a specific app since I recently onboarded the app and want to know if it's doing well
-- I want to know how many password resets are happening in my tenant
-#### What Azure AD license do you need to access the audit logs report?
-
-The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison as per [different types of licenses](../fundamentals/whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more information, see [Azure Active Directory features and capabilities](../fundamentals/whatis.md#which-features-work-in-azure-ad).
-
-### Sign-ins report
-
-The [sign-ins report](concept-sign-ins.md) enables you to find answers to questions such as:
-- What is the sign-in pattern of a user?
-- How many users have signed in over a week?
-- What's the status of these sign-ins?
-#### What Azure AD license do you need to access the sign-ins activity report?
-
-To access the sign-ins activity report, your tenant must have an Azure AD Premium license associated with it.
-
-## Programmatic access
-
-In addition to the user interface, Azure AD also provides you with [programmatic access](./howto-configure-prerequisites-for-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
-
-## Next steps
-- [Risky sign-ins report](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins)
-- [Audit logs report](concept-audit-logs.md)
-- [Sign-ins logs report](concept-sign-ins.md)
active-directory Overview Service Health Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md
- Title: What are Service Health notifications in Azure Active Directory?
-description: Learn how Service Health notifications provide you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them.
------- Previously updated : 11/01/2022-----
-# What are Service Health notifications in Azure Active Directory?
-
-Azure Service Health has been updated to provide notifications to tenant admins within the Azure portal when there are Service Health events for Azure Active Directory services. Due to the criticality of these events, an alert card in the Azure AD overview page will also be provided to support the discovery of these notifications.
-
-## How it works
-
-When there happens to be a Service Health notification for an Azure Active Directory service, it will be posted to the Service Health page within the Azure portal. Previously these were subscription events that were posted to all the subscription owners/readers of subscriptions within the tenant that had an issue. To improve the targeting of these notifications, they'll now be available as tenant events to the tenant admins of the impacted tenant. For a transition period, these service events will be available as both tenant events and subscription events.
-
-Now that they're available as tenant events, they appear on the Azure AD overview page as alert cards. Any Service Health notification that has been updated within the last three days will be shown in one of the cards.
-
-
-![Screenshot of the alert cards on the Azure AD overview page.](./media/overview-service-health-notifications/service-health-overview.png)
---
-Each card:
-- Represents a currently active event or a resolved one, distinguished by the icon in the card.
-- Has a link to the event. You can review the event on the Azure Service Health pages.
-
-![Screenshot of the event on the Azure Service Health page.](./media/overview-service-health-notifications/service-health-issues.png)
--
-
-
-For more information on the new Azure Service Health tenant events, see [Azure Service Health portal updates](../../service-health/service-health-portal-update.md)
-
-## Who will see the notifications
-
-Most of the built-in admin roles will have access to see these notifications. For the complete list of all authorized roles, see [Azure Service Health Tenant Admin authorized roles](../../service-health/admin-access-reference.md). Currently custom roles aren't supported.
-
-## What you should know
-
-Service Health events allow the addition of alerts and notifications to be applied to subscription events. This feature isn't yet supported with tenant events, but will be coming soon.
--
-
---
-## Next steps
--- [Service Health overview](../../service-health/service-health-overview.md)
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
- Title: Tutorial - Archive Azure Active Directory logs to a storage account
-description: Learn how to route Azure Active Directory logs to a storage account
------- Previously updated : 07/14/2023---
-# Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period.
---
-# Tutorial: Archive Azure AD logs to an Azure storage account
-
-In this tutorial, you learn how to set up Azure Monitor diagnostics settings to route Azure Active Directory (Azure AD) logs to an Azure storage account.
-
-## Prerequisites
-
-To use this feature, you need:
-
-* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
-* An Azure AD tenant.
-* A user who's a *Global Administrator* or *Security Administrator* for the Azure AD tenant.
-* To export sign-in data, you must have an Azure AD P1 or P2 license.
-
-## Archive logs to an Azure storage account
--
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. Select **Azure Active Directory** > **Monitoring** > **Audit logs**.
-
-1. Select **Export Data Settings**.
-
-1. You can either create a new setting (up to three settings are allowed) or edit an existing setting.
- - To change existing setting, select **Edit setting** next to the diagnostic setting you want to update.
- - To add new settings, select **Add diagnostic setting**.
-
- ![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png)
-
-1. Once in the **Diagnostic setting** pane if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
-
-1. Under **Destination Details** select the **Archive to a storage account** check box. Text fields for the retention period appear next to each log category.
-
-1. Select the Azure subscription and storage account for you want to route the logs.
-
-1. Select all the relevant categories under **Category details**:
-
- ![Diagnostics settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/DiagnosticSettings.png)
-
-1. In the **Retention days** field, enter the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
-
-1. Select **Save**.
-
-1. After the categories have been selected, in the **Retention days** field, type in the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up.
-
- > [!NOTE]
- > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md).
-
-1. Select **Save** to save the setting.
-
-1. Close the window to return to the Diagnostic settings pane.
-
-## Next steps
-
-* [Tutorial: Configure a log analytics workspace](tutorial-log-analytics-wizard.md)
-* [Interpret audit logs schema in Azure Monitor](./overview-reports.md)
-* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md)
active-directory Recommendation Migrate From Adal To Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md
Title: Azure Active Directory recommendation - Migrate from ADAL to MSAL | Microsoft Docs
+ Title: Migrate from ADAL to MSAL recommendation
description: Learn why you should migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries. -+ Previously updated : 08/10/2023 Last updated : 08/15/2023 -- # Azure AD recommendation: Migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries
Existing apps that use ADAL will continue to work after the end-of-support date.
## Action plan
-The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps in the Azure portal or programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK.
-
-### [Azure portal](#tab/Azure-portal)
-
-There are four steps to identifying and updating your apps in the Azure portal. The following steps are covered in detail in the [List all apps using ADAL](../develop/howto-get-list-of-all-auth-library-apps.md) article.
-
-1. Send Azure AD sign-in event to Azure Monitor.
-1. [Access the sign-ins workbook in Azure AD.](../develop/howto-get-list-of-all-auth-library-apps.md)
-1. Identify the apps that use ADAL.
-1. Update your code.
- - The steps to update your code vary depending on the type of application.
- - For example, the steps for .NET and Python applications have separate instructions.
- - For a full list of instructions for each scenario, see [How to migrate to MSAL](../develop/msal-migration.md#how-to-migrate-to-msal).
+The first step in migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK. The steps for the Microsoft Graph PowerShell SDK are provided in the Recommendation details in the Azure Active Directory portal.
### [Microsoft Graph API](#tab/Microsoft-Graph-API)

You can use Microsoft Graph to identify apps that need to be migrated to MSAL. To get started, see [How to use Microsoft Graph with Azure AD recommendations](howto-use-recommendations.md#how-to-use-microsoft-graph-with-azure-active-directory-recommendations).
-Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant.
+1. Sign in to [Graph Explorer](https://aka.ms/ge).
+1. Select **GET** as the HTTP method from the dropdown.
+1. Set the API version to **beta**.
+1. Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant.
```http
https://graph.microsoft.com/beta/directory/recommendations/<TENANT_ID>_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources
```
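If you prefer to run the same query from PowerShell, the following is a minimal sketch using the Microsoft Graph PowerShell SDK's generic request cmdlet. The `DirectoryRecommendations.Read.All` scope name and the `displayName` property on the impacted resources are assumptions and may need adjusting for your tenant.

```powershell
# Minimal sketch (assumptions: Microsoft Graph PowerShell SDK installed,
# DirectoryRecommendations.Read.All is sufficient to read recommendations).
Connect-MgGraph -Scopes "DirectoryRecommendations.Read.All"

$tenantId = "<TENANT_ID>"   # replace with your tenant ID
$uri = "https://graph.microsoft.com/beta/directory/recommendations/" +
       "$($tenantId)_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources"

# Invoke-MgGraphRequest returns the JSON payload; impacted apps are in .value.
$response = Invoke-MgGraphRequest -Method GET -Uri $uri
$response.value | ForEach-Object { $_.displayName }
```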
You can run the following set of commands in Windows PowerShell. These commands

+ ## Frequently asked questions

### Why does it take 30 days to change the status to completed?
+ ## Frequently asked questions ### Why does it take 30 days to change the status to completed?
To reduce false positives, the service uses a 30 day window for ADAL requests. T
### How were ADAL applications identified before the recommendation was released?
-The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) is an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook does not capture Service Principal sign-ins, while the recommendation does.
+The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) was an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook doesn't capture Service Principal sign-ins, while the recommendation does.
### Why is the number of ADAL applications different in the workbook and the recommendation?
active-directory Reference Powershell Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md
-+ # Azure AD PowerShell cmdlets for reporting
active-directory Tutorial Configure Log Analytics Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-configure-log-analytics-workspace.md
+
+ Title: Configure a log analytics workspace in Azure AD
+description: Learn how to configure Log Analytics workspace and run KQL queries on your identity data.
+++++ Last updated : 07/28/2023++++++
+#Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment.
+++
+# Tutorial: Configure a log analytics workspace
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure a log analytics workspace for your audit and sign-in logs
+> * Run queries using the Kusto Query Language (KQL)
+> * Create an alert rule that sends alerts when a specific account is used
+> * Create a custom workbook using the quickstart template
+> * Add a query to an existing workbook template
+
+## Prerequisites
+
+- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
+
+- An Azure Active Directory (Azure AD) tenant.
+
+- A user who's a Global Administrator or Security Administrator for the Azure AD tenant.
++
+Familiarize yourself with these articles:
+
+- [Tutorial: Collect and analyze resource logs from an Azure resource](../../azure-monitor/essentials/tutorial-resource-logs.md)
+
+- [How to integrate activity logs with Log Analytics](./howto-integrate-activity-logs-with-log-analytics.md)
+
+- [Manage emergency access account in Azure AD](../roles/security-emergency-access.md)
+
+- [KQL quick reference](/azure/data-explorer/kql-quick-reference)
+
+- [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md)
+++
+## Configure a workspace
++
+This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs.
+Configuring a log analytics workspace consists of two main steps:
+
+1. Creating a log analytics workspace
+2. Setting diagnostic settings
+
+**To configure a workspace:**
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **log analytics workspaces**.
+
+ ![Search resources services and docs](./media/tutorial-log-analytics-wizard/search-services.png)
+
+3. On the log analytics workspaces page, click **Add**.
+
+ ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-log-analytics-wizard/add.png)
+
+4. On the **Create Log Analytics workspace** page, perform the following steps:
+
+ ![Create log analytics workspace](./media/tutorial-log-analytics-wizard/create-log-analytics-workspace.png)
+
+ 1. Select your subscription.
+
+ 2. Select a resource group.
+
+ 3. In the **Name** textbox, type a name (for example, MytestWorkspace1).
+
+ 4. Select your region.
+
+5. Click **Review + Create**.
+
+ ![Review and create](./media/tutorial-log-analytics-wizard/review-create.png)
+
+6. Click **Create** and wait for the deployment to succeed. You may need to refresh the page to see the new workspace.
+
+ ![Create](./media/tutorial-log-analytics-wizard/create-workspace.png)
+
+7. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+8. In the **Monitoring** section, click **Diagnostic settings**.
+
+ ![Screenshot shows Diagnostic settings selected from Monitoring.](./media/tutorial-log-analytics-wizard/diagnostic-settings.png)
+
+9. On the **Diagnostic settings** page, click **Add diagnostic setting**.
+
+ ![Add diagnostic setting](./media/tutorial-log-analytics-wizard/add-diagnostic-setting.png)
+
+10. On the **Diagnostic setting** page, perform the following steps:
+
+ ![Select diagnostics settings](./media/tutorial-log-analytics-wizard/select-diagnostics-settings.png)
+
+ 1. Under **Category details**, select **AuditLogs** and **SigninLogs**.
+
+ 2. Under **Destination details**, select **Send to Log Analytics**, and then select your new log analytics workspace.
+
+ 3. Click **Save**.
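If you'd rather script the workspace-creation half of this procedure, here's a minimal sketch using the Az PowerShell modules; the resource group name, workspace name, region, and SKU below are placeholder assumptions. The Azure AD diagnostic setting itself (steps 7 through 10) is still configured in the portal as shown above.

```powershell
# Minimal sketch (assumptions: Az.OperationalInsights module installed,
# placeholder resource names; adjust to your environment).
Connect-AzAccount

$rg   = "MyResourceGroup"    # assumed existing resource group
$name = "MytestWorkspace1"   # workspace name from the example above

New-AzOperationalInsightsWorkspace `
    -ResourceGroupName $rg `
    -Name $name `
    -Location "eastus" `
    -Sku "PerGB2018"

# Capture the workspace resource ID; the diagnostic setting will target it.
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName $rg -Name $name
$workspace.ResourceId
```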
+
+## Run queries
+
+This procedure shows how to run queries using the **Kusto Query Language (KQL)**.
++
+**To run a query:**
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Logs**.
+
+4. On the **Logs** page, click **Get Started**.
+
+5. In the **Search** textbox, type your query.
+
+6. Click **Run**.
++
+### KQL query examples
+
+Take 10 random entries from the input data:
+
+`SigninLogs | take 10`
+
+Look at the sign-ins where Conditional Access succeeded:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus`
++
+Count how many successes there have been:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus | count`
++
+Aggregate count of successful sign-ins by user by day:
+
+`SigninLogs | where ConditionalAccessStatus == "success" | summarize SuccessfulSign-ins = count() by UserDisplayName, bin(TimeGenerated, 1d)`
++
+View how many times a user performs a certain operation in a specific time period:
+
+`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | summarize count() by OperationName, Identity`
++
+Pivot the results on operation name:
+
+`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | project OperationName, Identity | evaluate pivot(OperationName)`
++
+Merge the audit and sign-in logs using an inner join:
+
+`AuditLogs | where OperationName contains "Add User" | extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName) | project TimeGenerated, UserPrincipalName | join kind = inner (SigninLogs) on UserPrincipalName | summarize arg_min(TimeGenerated, *) by UserPrincipalName | extend SigninDate = TimeGenerated`
++
+View the number of sign-ins by client app type:
+
+`SigninLogs | summarize count() by ClientAppUsed`
+
+Count the sign-ins by day:
+
+`SigninLogs | summarize NumberOfEntries=count() by bin(TimeGenerated, 1d)`
+
+Take 5 random entries and project the columns you wish to see in the results:
+
+`SigninLogs | take 5 | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated`
++
+Take the top 5 by time in descending order and project the columns you wish to see:
+
+`SigninLogs | top 5 by TimeGenerated desc | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated`
+
+Create a new column by combining the values of two other columns:
+
+`SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed`
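These queries aren't limited to the portal. The following is a minimal sketch that runs one of the examples above with `Invoke-AzOperationalInsightsQuery` from the Az.OperationalInsights module; the resource names are placeholder assumptions.

```powershell
# Minimal sketch (assumptions: Az.OperationalInsights module installed,
# placeholder resource names, signed in with Connect-AzAccount).
$workspace = Get-AzOperationalInsightsWorkspace `
    -ResourceGroupName "MyResourceGroup" -Name "MytestWorkspace1"

$kql = 'SigninLogs | summarize NumberOfEntries = count() by bin(TimeGenerated, 1d)'

# -WorkspaceId expects the workspace's CustomerId (a GUID), not its resource ID.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $kql
$result.Results | Format-Table
```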
+
+## Create an alert rule
+
+This procedure shows how to send alerts when the breakglass account is used.
+
+**To create an alert rule:**
+
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Logs**.
+
+4. On the **Logs** page, click **Get Started**.
+
+5. In the **Search** textbox, type: `SigninLogs | where UserDisplayName contains "BreakGlass" | project UserDisplayName`
+
+6. Click **Run**.
+
+7. In the toolbar, click **New alert rule**.
+
+ ![New alert rule](./media/tutorial-log-analytics-wizard/new-alert-rule.png)
+
+8. On the **Create alert rule** page, verify that the scope is correct.
+
+9. Under **Condition**, click: **Whenever the average custom log search is greater than `logic undefined` count**
+
+ ![Default condition](./media/tutorial-log-analytics-wizard/default-condition.png)
+
+10. On the **Configure signal logic** page, in the **Alert logic** section, perform the following steps:
+
+ ![Alert logic](./media/tutorial-log-analytics-wizard/alert-logic.png)
+
+ 1. As **Based on**, select **Number of results**.
+
+ 2. As **Operator**, select **Greater than**.
+
+ 3. As **Threshold value**, select **0**.
+
+11. On the **Configure signal logic** page, in the **Evaluated based on** section, perform the following steps:
+
+ ![Evaluated based on](./media/tutorial-log-analytics-wizard/evaluated-based-on.png)
+
+ 1. As **Period (in minutes)**, select **5**.
+
+ 2. As **Frequency (in minutes)**, select **5**.
+
+ 3. Click **Done**.
+
+12. Under **Action group**, click **Select action group**.
+
+ ![Action group](./media/tutorial-log-analytics-wizard/action-group.png)
+
+13. On the **Select an action group to attach to this alert rule** pane, click **Create action group**.
+
+ ![Create action group](./media/tutorial-log-analytics-wizard/create-action-group.png)
+
+14. On the **Create action group** page, perform the following steps:
+
+ ![Instance details](./media/tutorial-log-analytics-wizard/instance-details.png)
+
+ 1. In the **Action group name** textbox, type **My action group**.
+
+ 2. In the **Display name** textbox, type **My action**.
+
+ 3. Click **Review + create**.
+
+ 4. Click **Create**.
++
+15. Under **Customize action**, perform the following steps:
+
+ ![Customize actions](./media/tutorial-log-analytics-wizard/customize-actions.png)
+
+ 1. Select **Email subject**.
+
+ 2. In the **Subject line** textbox, type: `Breakglass account has been used`
+
+16. Under **Alert rule details**, perform the following steps:
+
+ ![Alert rule details](./media/tutorial-log-analytics-wizard/alert-rule-details.png)
+
+ 1. In the **Alert rule name** textbox, type: `Breakglass account`
+
+ 2. In the **Description** textbox, type: `Your emergency access account has been used`
+
+17. Click **Create alert rule**.
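The same alert rule can also be created programmatically. Below is a minimal sketch using the Az.Monitor scheduled-query-rule cmdlets; it assumes a recent Az.Monitor version (parameter names have changed across module versions), that the action group from step 14 already exists, and placeholder resource names and region.

```powershell
# Minimal sketch (assumptions: recent Az.Monitor module, existing workspace
# and action group, placeholder resource names).
$workspace   = Get-AzOperationalInsightsWorkspace `
    -ResourceGroupName "MyResourceGroup" -Name "MytestWorkspace1"
$actionGroup = Get-AzActionGroup -ResourceGroupName "MyResourceGroup" -Name "My action group"

# Fire whenever the breakglass query returns more than 0 rows.
$condition = New-AzScheduledQueryRuleConditionObject `
    -Query 'SigninLogs | where UserDisplayName contains "BreakGlass" | project UserDisplayName' `
    -TimeAggregation "Count" `
    -Operator "GreaterThan" `
    -Threshold 0

New-AzScheduledQueryRule `
    -Name "Breakglass account" `
    -ResourceGroupName "MyResourceGroup" `
    -Location "eastus" `
    -Scope $workspace.ResourceId `
    -Severity 2 `
    -WindowSize ([System.TimeSpan]::FromMinutes(5)) `
    -EvaluationFrequency ([System.TimeSpan]::FromMinutes(5)) `
    -CriterionAllOf $condition `
    -ActionGroupResourceId $actionGroup.Id
```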
++
+## Create a custom workbook
+
+This procedure shows how to create a new workbook using the quickstart template.
++++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Workbooks**.
+
+ ![Screenshot shows Monitoring in the Azure portal menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
+
+4. In the **Quickstart** section, click **Empty**.
+
+ ![Quick start](./media/tutorial-log-analytics-wizard/quick-start.png)
+
+5. Click **Add**.
+
+ ![Add workbook](./media/tutorial-log-analytics-wizard/add-workbook.png)
+
+6. Click **Add text**.
+
+ ![Add text](./media/tutorial-log-analytics-wizard/add-text.png)
++
+7. In the textbox, type: `# Client apps used in the past week`, and then click **Done Editing**.
+
+ ![Workbook text](./media/tutorial-log-analytics-wizard/workbook-text.png)
+
+8. In the new workbook, click **Add**, and then click **Add query**.
+
+ ![Add query](./media/tutorial-log-analytics-wizard/add-query.png)
+
+9. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed`
+
+10. Click **Run Query**.
+
+ ![Screenshot shows the Run Query button.](./media/tutorial-log-analytics-wizard/run-workbook-query.png)
+
+11. In the toolbar, under **Visualization**, click **Pie chart**.
+
+ ![Pie chart](./media/tutorial-log-analytics-wizard/pie-chart.png)
+
+12. Click **Done Editing**.
+
+ ![Done editing](./media/tutorial-log-analytics-wizard/done-workbook-editing.png)
+++
+## Add a query to a workbook template
+
+This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of Conditional Access successes to failures.
++
+1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator.
+
+2. Search for **Azure Active Directory**.
+
+ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png)
+
+3. In the **Monitoring** section, click **Workbooks**.
+
+ ![Screenshot shows Monitoring in the menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png)
+
+4. In the **Conditional Access** section, click **Conditional Access Insights and Reporting**.
+
+ ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-log-analytics-wizard/conditional-access-template.png)
+
+5. In the toolbar, click **Edit**.
+
+ ![Screenshot shows the Edit button.](./media/tutorial-log-analytics-wizard/edit-workbook-template.png)
+
+6. In the toolbar, click the three dots, then **Add**, and then **Add query**.
+
+ ![Add workbook query](./media/tutorial-log-analytics-wizard/add-custom-workbook-query.png)
+
+7. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus`
+
+8. Click **Run Query**.
+
+ ![Screenshot shows the Run Query button to run this query.](./media/tutorial-log-analytics-wizard/run-workbook-insights-query.png)
+
+9. Click **Time Range**, and then select **Set in query**.
+
+10. Click **Visualization**, and then select **Bar chart**.
+
+11. Click **Advanced Settings**, enter `Conditional Access status over the last 20 days` as the chart title, and then click **Done Editing**.
+
+ ![Set chart title](./media/tutorial-log-analytics-wizard/set-chart-title.png)
++++++++
+## Next steps
+
+Advance to the next article to learn more about monitoring in Azure AD.
+> [!div class="nextstepaction"]
+> [Monitoring](overview-monitoring.md)
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Last updated 11/15/2022 -+
active-directory Admin Units Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md
Last updated 06/09/2023 -+
active-directory Admin Units Members Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md
Last updated 05/13/2022 -+
active-directory Admin Units Members List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md
Last updated 06/09/2023 -+
active-directory Admin Units Members Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md
Last updated 06/09/2023 -+
active-directory Assign Roles Different Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/assign-roles-different-scopes.md
Last updated 02/04/2022 -+
active-directory Concept Understand Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/concept-understand-roles.md
Last updated 04/22/2022 -+
active-directory Custom Assign Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-assign-powershell.md
Last updated 05/10/2022 -+ # Assign custom roles with resource scope using PowerShell in Azure Active Directory
active-directory Custom Available Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-available-permissions.md
Last updated 11/04/2020 -+
active-directory Custom Consent Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-consent-permissions.md
Last updated 01/31/2023 -+ # App consent permissions for custom roles in Azure Active Directory
active-directory Custom Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-create.md
Last updated 12/09/2022 -+ # Create and assign a custom role in Azure Active Directory
active-directory Custom Enterprise App Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-app-permissions.md
Last updated 01/31/2023 -+ # Enterprise application permissions for custom roles in Azure Active Directory
active-directory Custom Enterprise Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-apps.md
Last updated 02/04/2022 -+
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-overview.md
Last updated 04/10/2023 -+
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md
Last updated 04/10/2023 -+
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md
Last updated 04/10/2023 -+
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-remove-assignment.md
Last updated 02/04/2022 -+
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md
Last updated 08/08/2023 -+
active-directory Manage Roles Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md
Last updated 02/06/2023 -+
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md
Last updated 03/17/2022 -+
active-directory Protected Actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md
+ Last updated 04/10/2023
active-directory Quickstart App Registration Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/quickstart-app-registration-limits.md
Last updated 02/04/2022 -+ # Quickstart: Grant permission to create unlimited app registrations
active-directory Role Definitions List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/role-definitions-list.md
Last updated 02/04/2022 -+
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-planning.md
-+
active-directory View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md
Last updated 04/15/2022 -+ # List Azure AD role assignments
active-directory Canva Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canva-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Canva for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Canva.
++
+writer: twimmers
+
+ms.assetid: 9bf62920-d9e0-4ed4-a4f6-860cb9563b00
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Canva for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Canva and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Canva](https://www.canva.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Canva.
+> * Remove users in Canva when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Canva.
+> * Provision groups and group memberships in Canva.
+> * [Single sign-on](canva-tutorial.md) to Canva (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Canva tenant.
+* A user account in Canva with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Canva](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Canva to support provisioning with Azure AD
+Contact Canva support to configure Canva to support provisioning with Azure AD.
+
+## Step 3. Add Canva from the Azure AD application gallery
+
+Add Canva from the Azure AD application gallery to start managing provisioning to Canva. If you have previously set up Canva for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Canva
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Canva based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Canva in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Canva**.
+
+ ![Screenshot of the Canva link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Canva Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Canva. If the connection fails, ensure your Canva account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Canva**.
+
+1. Review the user attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Canva for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Canva API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Canva|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |externalId|String||
+ |emails[type eq "work"].value|String||&check;
+ |name.givenName|String||
+ |name.familyName|String||
+ |displayName|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Canva**.
+
+1. Review the group attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Canva for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Canva|
+ |||||
+ |displayName|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Canva, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Canva by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment (a PowerShell sketch for checking the provisioning job follows this list):
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
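You can also check the provisioning job's state from PowerShell. The following is a minimal sketch using the Microsoft Graph PowerShell SDK; the scope names and the `displayName` filter value are assumptions.

```powershell
# Minimal sketch (assumptions: Microsoft Graph PowerShell SDK installed,
# Application.Read.All and Synchronization.Read.All scopes are sufficient).
Connect-MgGraph -Scopes "Application.Read.All", "Synchronization.Read.All"

# Find the service principal that backs the Canva enterprise application.
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Canva'"

# List its provisioning (synchronization) jobs and their current state,
# for example Active, Paused, or Quarantine.
Get-MgServicePrincipalSynchronizationJob -ServicePrincipalId $sp.Id |
    Select-Object Id, TemplateId, @{ n = "State"; e = { $_.Status.Code } }
```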
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Forcepoint Cloud Security Gateway Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Forcepoint Cloud Security Gateway - User Authentication.
++
+writer: twimmers
+
+ms.assetid: 415b2ba3-a9a5-439a-963a-7c2c0254ced1
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Forcepoint Cloud Security Gateway - User Authentication and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Forcepoint Cloud Security Gateway - User Authentication](https://admin.forcepoint.net) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Forcepoint Cloud Security Gateway - User Authentication.
+> * Remove users in Forcepoint Cloud Security Gateway - User Authentication when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Forcepoint Cloud Security Gateway - User Authentication.
+> * Provision groups and group memberships in Forcepoint Cloud Security Gateway - User Authentication.
+> * [Single sign-on](forcepoint-cloud-security-gateway-tutorial.md) to Forcepoint Cloud Security Gateway - User Authentication (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Forcepoint Cloud Security Gateway - User Authentication tenant.
+* A user account in Forcepoint Cloud Security Gateway - User Authentication with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Forcepoint Cloud Security Gateway - User Authentication](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD
+Contact Forcepoint Cloud Security Gateway - User Authentication support to configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD.
+
+## Step 3. Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery
+
+Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery to start managing provisioning to Forcepoint Cloud Security Gateway - User Authentication. If you have previously set up Forcepoint Cloud Security Gateway - User Authentication for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Forcepoint Cloud Security Gateway - User Authentication
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Forcepoint Cloud Security Gateway - User Authentication based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Forcepoint Cloud Security Gateway - User Authentication in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Forcepoint Cloud Security Gateway - User Authentication**.
+
+ ![Screenshot of the Forcepoint Cloud Security Gateway - User Authentication link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Forcepoint Cloud Security Gateway - User Authentication Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Forcepoint Cloud Security Gateway - User Authentication. If the connection fails, ensure your Forcepoint Cloud Security Gateway - User Authentication account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Forcepoint Cloud Security Gateway - User Authentication**.
+
+1. Review the user attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Forcepoint Cloud Security Gateway - User Authentication for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Forcepoint Cloud Security Gateway - User Authentication API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication|
+ |||||
+ |userName|String|&check;|&check;
+ |externalId|String||&check;
+ |displayName|String||&check;
+ |urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User:ntlmId|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Forcepoint Cloud Security Gateway - User Authentication**.
+
+1. Review the group attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Forcepoint Cloud Security Gateway - User Authentication for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication|
+ |||||
+ |displayName|String|&check;|&check;
+ |externalId|String||
+ |members|Reference||
+
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Forcepoint Cloud Security Gateway - User Authentication, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Forcepoint Cloud Security Gateway - User Authentication by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment (a PowerShell sketch for reviewing the provisioning logs follows this list):
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
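Provisioning events can also be reviewed from PowerShell. Here's a minimal sketch using the Microsoft Graph PowerShell SDK's provisioning-log cmdlet; the scope name and the client-side display-name filter are assumptions.

```powershell
# Minimal sketch (assumptions: Microsoft Graph PowerShell SDK installed,
# AuditLog.Read.All scope grants access to the provisioning logs).
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Pull recent provisioning events and keep the ones for this application.
Get-MgAuditLogProvisioning -Top 50 |
    Where-Object { $_.ServicePrincipal.DisplayName -like "Forcepoint*" } |
    Select-Object ActivityDateTime,
                  @{ n = "Result"; e = { $_.ProvisioningStatusInfo.Status } },
                  @{ n = "Target"; e = { $_.TargetSystem.DisplayName } }
```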
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Hypervault Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hypervault-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Hypervault for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Hypervault.
++
+writer: twimmers
+
+ms.assetid: eca2ff9e-a09d-4bb4-88f6-6021a93d2c9d
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Hypervault for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Hypervault and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Hypervault](https://hypervault.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Hypervault.
+> * Remove users in Hypervault when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Hypervault.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Hypervault (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Hypervault with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Hypervault](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Hypervault to support provisioning with Azure AD
+Contact Hypervault support to configure Hypervault to support provisioning with Azure AD.
+
+## Step 3. Add Hypervault from the Azure AD application gallery
+
+Add Hypervault from the Azure AD application gallery to start managing provisioning to Hypervault. If you have previously set up Hypervault for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who is in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who is provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app (see the PowerShell sketch after this list). When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
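If you scope provisioning by assignment, a test user can also be assigned from PowerShell. The sketch below assumes the Microsoft Graph PowerShell SDK, that the app accepts the default all-zeros app role, and a placeholder user name.

```powershell
# Minimal sketch (assumptions: Microsoft Graph PowerShell SDK installed,
# default "all zeros" app role, placeholder user and app names).
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All"

$sp   = Get-MgServicePrincipal -Filter "displayName eq 'Hypervault'"
$user = Get-MgUser -UserId "testuser@contoso.com"   # placeholder test user

# Assign the user to the Hypervault enterprise application.
New-MgUserAppRoleAssignment -UserId $user.Id `
    -PrincipalId $user.Id `
    -ResourceId $sp.Id `
    -AppRoleId ([Guid]::Empty)
```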
++
+## Step 5. Configure automatic user provisioning to Hypervault
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Hypervault based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for Hypervault in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Hypervault**.
+
+ ![Screenshot of the Hypervault link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Hypervault Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Hypervault. If the connection fails, ensure your Hypervault account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Hypervault**.
+
+1. Review the user attributes that are synchronized from Azure AD to Hypervault in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Hypervault for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Hypervault API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Hypervault|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |displayName|String||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |emails[type eq "work"].value|String||&check;
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Hypervault, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to Hypervault by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Oneflow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oneflow-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Oneflow for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Oneflow.
++
+writer: twimmers
+
+ms.assetid: 6af89cdd-956c-4cc2-9a61-98afe7814470
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Oneflow for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Oneflow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Oneflow](https://oneflow.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Oneflow.
+> * Remove users in Oneflow when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Oneflow.
+> * Provision groups and group memberships in Oneflow.
+> * [Single sign-on](oneflow-tutorial.md) to Oneflow (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A Oneflow tenant.
+* A user account in Oneflow with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Oneflow](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Oneflow to support provisioning with Azure AD
+Contact Oneflow support to configure Oneflow to support provisioning with Azure AD.
+
+## Step 3. Add Oneflow from the Azure AD application gallery
+
+Add Oneflow from the Azure AD application gallery to start managing provisioning to Oneflow. If you have previously set up Oneflow for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app (see the PowerShell sketch after this list). When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
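Because Oneflow provisioning supports groups, a pilot group can likewise be assigned from PowerShell. The sketch below assumes the Microsoft Graph PowerShell SDK, the default all-zeros app role, and a placeholder group name.

```powershell
# Minimal sketch (assumptions: Microsoft Graph PowerShell SDK installed,
# default "all zeros" app role, placeholder group and app names).
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All", "Group.Read.All"

$sp    = Get-MgServicePrincipal -Filter "displayName eq 'Oneflow'"
$group = Get-MgGroup -Filter "displayName eq 'Oneflow Pilot Users'"   # placeholder group

# Assign the pilot group to the Oneflow enterprise application.
New-MgGroupAppRoleAssignment -GroupId $group.Id `
    -PrincipalId $group.Id `
    -ResourceId $sp.Id `
    -AppRoleId ([Guid]::Empty)
```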
++
+## Step 5. Configure automatic user provisioning to Oneflow
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Oneflow based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Oneflow in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Oneflow**.
+
+ ![Screenshot of the Oneflow link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Oneflow Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Oneflow. If the connection fails, ensure your Oneflow account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
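+   Before relying on **Test Connection**, you can optionally probe the endpoint yourself. The following is a minimal sketch that assumes Oneflow exposes a standard SCIM 2.0 endpoint; the tenant URL and secret token are placeholders for the values from your Oneflow admin console.
+
+   ```bash
+   # Placeholders: substitute the values supplied by Oneflow.
+   TENANT_URL="https://<your-oneflow-scim-endpoint>"
+   SECRET_TOKEN="<your-secret-token>"
+
+   # A SCIM 2.0 service answers this with a ServiceProviderConfig document.
+   curl -s -H "Authorization: Bearer $SECRET_TOKEN" \
+       "$TENANT_URL/ServiceProviderConfig"
+   ```
+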
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Oneflow**.
+
+1. Review the user attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Oneflow for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Oneflow API supports filtering users based on that attribute. Select the **Save** button to commit any changes. A sketch of the kind of filter query involved appears after the table.
+
+ |Attribute|Type|Supported for filtering|Required by Oneflow|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |phoneNumbers[type eq \"work\"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |nickName|String||
+ |title|String||
+ |profileUrl|String||
+ |displayName|String||
+ |addresses[type eq \"work\"].streetAddress|String||
+ |addresses[type eq \"work\"].locality|String||
+ |addresses[type eq \"work\"].region|String||
+ |addresses[type eq \"work\"].postalCode|String||
+ |addresses[type eq \"work\"].country|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:adSourceAnchor|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute1|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute2|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute3|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute4|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute5|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:distinguishedName|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:domain|String||
+ |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:userPrincipalName|String||
+
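+   To illustrate the filtering requirement, the following sketch shows the kind of SCIM query used when matching on `userName`. The endpoint and token are placeholders, and a standard SCIM 2.0 endpoint is assumed.
+
+   ```bash
+   # Match a user by the default matching attribute, userName.
+   # TENANT_URL and SECRET_TOKEN are placeholders for your Oneflow values.
+   curl -s -H "Authorization: Bearer $SECRET_TOKEN" \
+       "$TENANT_URL/Users?filter=userName%20eq%20%22alice%40contoso.com%22"
+   ```
+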
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Oneflow**.
+
+1. Review the group attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Oneflow for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Oneflow|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Oneflow, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Oneflow by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Postman Provisioning Tutorialy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postman-provisioning-tutorialy.md
+
+ Title: 'Tutorial: Configure Postman for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Postman.
++
+writer: twimmers
+
+ms.assetid: f3687101-9bec-4f18-9884-61833f4f58c3
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Postman for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Postman and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Postman](https://www.postman.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Postman.
+> * Remove users in Postman when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Postman.
+> * Provision groups and group memberships in Postman.
+> * [Single sign-on](postman-tutorial.md) to Postman (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A Postman tenant.
+* A user account in Postman with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Postman](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Postman to support provisioning with Azure AD
+Contact Postman support to configure Postman to support provisioning with Azure AD.
+
+## Step 3. Add Postman from the Azure AD application gallery
+
+Add Postman from the Azure AD application gallery to start managing provisioning to Postman. If you have previously set up Postman for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Postman
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Postman based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Postman in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Postman**.
+
+ ![Screenshot of the Postman link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Postman Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Postman. If the connection fails, ensure your Postman account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Postman**.
+
+1. Review the user attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Postman for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Postman API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Postman|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |externalId|String||&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Postman**.
+
+1. Review the group attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Postman for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Postman|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String||&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Postman, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Postman by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully (a scripted example follows this list).
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
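+
+In addition to the portal views, you can read the same provisioning events from Microsoft Graph. A minimal sketch, assuming the signed-in Azure CLI account has permission to read the audit logs:
+
+```azurecli-interactive
+# List the five most recent provisioning events from the audit logs.
+az rest --method get \
+    --url "https://graph.microsoft.com/v1.0/auditLogs/provisioning?\$top=5"
+```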
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Sap Fiori Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-fiori-tutorial.md
+ Last updated 11/21/2022
active-directory Sap Netweaver Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md
+ Last updated 11/21/2022
active-directory Servicely Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicely-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Servicely for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Servicely.
++
+writer: twimmers
+
+ms.assetid: be3af02b-da77-4a88-bec3-e634e2af38b3
++++ Last updated : 08/16/2023+++
+# Tutorial: Configure Servicely for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Servicely and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Servicely](https://servicely.ai/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Servicely.
+> * Remove users in Servicely when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and Servicely.
+> * Provision groups and group memberships in Servicely.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A Servicely tenant.
+* A user account in Servicely with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Servicely](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Servicely to support provisioning with Azure AD
+Contact Servicely support to configure Servicely to support provisioning with Azure AD.
+
+## Step 3. Add Servicely from the Azure AD application gallery
+
+Add Servicely from the Azure AD application gallery to start managing provisioning to Servicely. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who is in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who is provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Servicely
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Servicely based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Servicely in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Servicely**.
+
+ ![Screenshot of the Servicely link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Servicely Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Servicely. If the connection fails, ensure your Servicely account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Servicely**.
+
+1. Review the user attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Servicely for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Servicely API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Servicely|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |externalId|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||
+ |name.familyName|String||
+ |title|String||
+ |preferredLanguage|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |timezone|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Servicely**.
+
+1. Review the group attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Servicely for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Servicely|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Servicely, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Servicely by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
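+
+To check on a cycle from the command line, you can also query the synchronization job that backs this configuration through Microsoft Graph. This is a sketch, with a placeholder for the Servicely service principal's object ID:
+
+```azurecli-interactive
+# Inspect the provisioning (synchronization) job status for the app.
+# <sp-object-id> is a placeholder for the service principal's object ID.
+az rest --method get \
+    --url "https://graph.microsoft.com/v1.0/servicePrincipals/<sp-object-id>/synchronization/jobs"
+```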
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Sharepoint On Premises Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md
+ Last updated 11/21/2022
active-directory Configure Cmmc Level 2 Identification And Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md
Last updated 1/3/2023 -+
active-directory How To Issuer Revoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md
Verifiable credential data isn't stored by Microsoft. Therefore, the issuer need
## How does revocation work?
-Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c-ccg/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API will do the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and does not contain any data who is checking if the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status.
+Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API does the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and doesn't contain any data about who is checking whether the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status.
### Verifiable credential data
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
Previously updated : 07/04/2023 Last updated : 08/10/2023 # Configure Azure AI services virtual networks
-Azure AI services provides a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networksΓÇï. When network rules are configured, only applications requesting data over the specified set of networks can access the account. You can limit access to your resources with request filtering. Allowing only requests originating from specified IP addresses, IP ranges or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
+Azure AI services provide a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networksΓÇï. When network rules are configured, only applications that request data over the specified set of networks can access the account. You can limit access to your resources with *request filtering*, which allows requests that originate only from specified IP addresses, IP ranges, or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md).
An application that accesses an Azure AI services resource when network rules are in effect requires authorization. Authorization is supported with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) credentials or with a valid API key. > [!IMPORTANT]
-> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. In order to allow requests through, one of the following conditions needs to be met:
+> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. To allow requests through, one of the following conditions needs to be met:
>
-> * The request should originate from a service operating within an Azure Virtual Network (VNet) on the allowed subnet list of the target Azure AI services account. The endpoint in requests originated from VNet needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account.
-> * Or the request should originate from an allowed list of IP addresses.
+> - The request originates from a service that operates within an Azure Virtual Network on the allowed subnet list of the target Azure AI services account. The endpoint request that originated from the virtual network needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account.
+> - The request originates from an allowed list of IP addresses.
>
-> Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on.
+> Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Scenarios
-To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients.
+To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks, including internet traffic, by default. Then, configure rules that grant access to traffic from specific virtual networks. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges and enable connections from specific internet or on-premises clients.
-Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. Once network rules are applied, they're enforced for all requests.
+Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data by using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. After network rules are applied, they're enforced for all requests.
## Supported regions and service offerings
-Virtual networks (VNETs) are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag.
+Virtual networks are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services support service tags for network rules configuration. The services listed here are included in the `CognitiveServicesManagement` service tag.
> [!div class="checklist"]
-> * Anomaly Detector
-> * Azure OpenAI
-> * Azure AI Vision
-> * Content Moderator
-> * Custom Vision
-> * Face
-> * Language Understanding (LUIS)
-> * Personalizer
-> * Speech service
-> * Language service
-> * QnA Maker
-> * Translator Text
-
+> - Anomaly Detector
+> - Azure OpenAI
+> - Content Moderator
+> - Custom Vision
+> - Face
+> - Language Understanding (LUIS)
+> - Personalizer
+> - Speech service
+> - Language
+> - QnA Maker
+> - Translator
> [!NOTE]
-> If you're using, Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal , Speech Studio or Language Studio from a virtual network, you will need to use the following tags:
+> If you use Azure OpenAI, LUIS, Speech Services, or Language services, the `CognitiveServicesManagement` tag only enables you to use the service by using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags:
>
-> * **AzureActiveDirectory**
-> * **AzureFrontDoor.Frontend**
-> * **AzureResourceManager**
-> * **CognitiveServicesManagement**
-> * **CognitiveServicesFrontEnd**
-
+> - `AzureActiveDirectory`
+> - `AzureFrontDoor.Frontend`
+> - `AzureResourceManager`
+> - `CognitiveServicesManagement`
+> - `CognitiveServicesFrontEnd`
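+
+For example, to let clients in a locked-down subnet call the service through the SDK or REST API, you can reference a tag in an outbound network security group rule. The following is a minimal sketch; the resource group and NSG names are placeholders.
+
+```azurecli-interactive
+# Allow outbound HTTPS to the Azure AI services management endpoints.
+# Resource names are placeholders for your own environment.
+az network nsg rule create \
+    --resource-group "myresourcegroup" \
+    --nsg-name "mynsg" \
+    --name "AllowCognitiveServicesManagement" \
+    --priority 100 \
+    --direction Outbound \
+    --access Allow \
+    --protocol Tcp \
+    --destination-address-prefixes "CognitiveServicesManagement" \
+    --destination-port-ranges 443
+```
+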
## Change the default network access rule By default, Azure AI services resources accept connections from clients on any network. To limit access to selected networks, you must first change the default action. > [!WARNING]
-> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If you are allow listing IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
+> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to *deny* blocks all access to the data unless specific network rules that *grant* access are also applied.
+>
+> Before you change the default rule to deny access, be sure to grant access to any allowed networks by using network rules. If you're allowlisting IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network.
-### Managing default network access rules
+### Manage default network access rules
You can manage default network access rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI.
You can manage default network access rules for Azure AI services resources thro
1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, then select **Networking**.
- ![Virtual network option](media/vnet/virtual-network-blade.png)
+ :::image type="content" source="media/vnet/virtual-network-blade.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected." lightbox="media/vnet/virtual-network-blade.png":::
-1. To deny access by default, choose to allow access from **Selected networks**. With the **Selected networks** setting alone, unaccompanied by configured **Virtual networks** or **Address ranges** - all access is effectively denied. When all access is denied, requests attempting to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell or, Azure CLI can still be used to configure the Azure AI services resource.
-1. To allow traffic from all networks, choose to allow access from **All networks**.
+1. To deny access by default, under **Firewalls and virtual networks**, select **Selected Networks and Private Endpoints**.
- ![Virtual networks deny](media/vnet/virtual-network-deny.png)
+ With this setting alone, unaccompanied by configured virtual networks or address ranges, all access is effectively denied. When all access is denied, requests that attempt to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell, or the Azure CLI can still be used to configure the Azure AI services resource.
+
+1. To allow traffic from all networks, select **All networks**.
+
+ :::image type="content" source="media/vnet/virtual-network-deny.png" alt-text="Screenshot shows the Networking page with All networks selected." lightbox="media/vnet/virtual-network-deny.png":::
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
1. Display the status of the default rule for the Azure AI services resource.
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
- (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
- ```
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
+ (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction
+ ```
-1. Set the default rule to deny network access by default.
+   You can get values for your resource group `myresourcegroup` and the name of your Azure AI services resource `myaccount` from the Azure portal.
+
+1. Set the default rule to deny network access.
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -DefaultAction Deny
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "DefaultAction" = "Deny"
} Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ```
-1. Set the default rule to allow network access by default.
+1. Set the default rule to allow network access.
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -DefaultAction Allow
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "DefaultAction" = "Allow"
} Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
1. Display the status of the default rule for the Azure AI services resource. ```azurecli-interactive az cognitiveservices account show \
- -g "myresourcegroup" -n "myaccount" \
- --query networkRuleSet.defaultAction
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --query properties.networkAcls.defaultAction
```
+1. Get the resource ID for use in the later steps.
+
+ ```azurecli-interactive
+    resourceId=$(az cognitiveservices account show \
+ --resource-group "myresourcegroup" \
+ --name "myaccount" --query id --output tsv)
+ ```
+ 1. Set the default rule to deny network access by default. ```azurecli-interactive az resource update \
- --ids {resourceId} \
+ --ids $resourceId \
--set properties.networkAcls="{'defaultAction':'Deny'}" ```
You can manage default network access rules for Azure AI services resources thro
```azurecli-interactive az resource update \
- --ids {resourceId} \
+ --ids $resourceId \
--set properties.networkAcls="{'defaultAction':'Allow'}" ```
You can manage default network access rules for Azure AI services resources thro
## Grant access from a virtual network
-You can configure Azure AI services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Azure AD tenant.
+
+Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI services service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
-Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) for Azure AI services within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure AI services service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
+The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource to allow requests from specific subnets in a virtual network. Clients granted access by these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data.
-Each Azure AI services resource supports up to 100 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range).
+Each Azure AI services resource supports up to 100 virtual network rules, which can be combined with IP network rules. For more information, see [Grant access from an internet IP range](#grant-access-from-an-internet-ip-range) later in this article.
-### Required permissions
+### Set required permissions
-To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions.
+To apply a virtual network rule to an Azure AI services resource, you need the appropriate permissions for the subnets that you add. The required permission is the default *Contributor* role or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions.
-Azure AI services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+The Azure AI services resource and the virtual networks that are granted access might be in different subscriptions, including subscriptions that are part of a different Azure AD tenant.
> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure AD tenant are currently supported only through PowerShell, the Azure CLI, and the REST APIs. You can view these rules in the Azure portal, but you can't configure them.
-### Managing virtual network rules
+### Configure virtual network rules
You can manage virtual network rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI. # [Azure portal](#tab/portal)
+To grant access to a virtual network with an existing network rule:
+ 1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, then select **Networking**.
-1. Check that you've selected to allow access from **Selected networks**.
+1. Confirm that you selected **Selected Networks and Private Endpoints**.
-1. To grant access to a virtual network with an existing network rule, under **Virtual networks**, select **Add existing virtual network**.
+1. Under **Allow access from**, select **Add existing virtual network**.
- ![Add existing vNet](media/vnet/virtual-network-add-existing.png)
+ :::image type="content" source="media/vnet/virtual-network-add-existing.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add existing virtual network highlighted." lightbox="media/vnet/virtual-network-add-existing.png":::
1. Select the **Virtual networks** and **Subnets** options, and then select **Enable**.
- ![Add existing vNet details](media/vnet/virtual-network-add-existing-details.png)
+ :::image type="content" source="media/vnet/virtual-network-add-existing-details.png" alt-text="Screenshot shows the Add networks dialog box where you can enter a virtual network and subnet.":::
-1. To create a new virtual network and grant it access, select **Add new virtual network**.
+ > [!NOTE]
+ > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
+ >
+ > Currently, only virtual networks that belong to the same Azure AD tenant are available for selection during rule creation. To grant access to a subnet in a virtual network that belongs to another tenant, use PowerShell, the Azure CLI, or the REST APIs.
- ![Add new vNet](media/vnet/virtual-network-add-new.png)
+1. Select **Save** to apply your changes.
+
+To create a new virtual network and grant it access:
+
+1. On the same page as the previous procedure, select **Add new virtual network**.
+
+ :::image type="content" source="media/vnet/virtual-network-add-new.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add new virtual network highlighted." lightbox="media/vnet/virtual-network-add-new.png":::
1. Provide the information necessary to create the new virtual network, and then select **Create**.
- ![Create vNet](media/vnet/virtual-network-create.png)
+ :::image type="content" source="media/vnet/virtual-network-create.png" alt-text="Screenshot shows the Create virtual network dialog box.":::
- > [!NOTE]
- > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation.
- >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs.
+1. Select **Save** to apply your changes.
-1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
+To remove a virtual network or subnet rule:
- ![Remove vNet](media/vnet/virtual-network-remove.png)
+1. On the same page as the previous procedures, select **...(More options)** to open the context menu for the virtual network or subnet, and select **Remove**.
+
+ :::image type="content" source="media/vnet/virtual-network-remove.png" alt-text="Screenshot shows the option to remove a virtual network." lightbox="media/vnet/virtual-network-remove.png":::
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
-1. List virtual network rules.
+1. List the configured virtual network rules.
```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
(Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).VirtualNetworkRules ```
-1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet.
```azurepowershell-interactive Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" ` -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `
- -AddressPrefix "10.0.0.0/24" `
+ -AddressPrefix "CIDR" `
-ServiceEndpoint "Microsoft.CognitiveServices" | Set-AzVirtualNetwork ```
You can manage virtual network rules for Azure AI services resources through the
```azurepowershell-interactive $subParameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myvnet"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myvnet"
} $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"
You can manage virtual network rules for Azure AI services resources through the
``` > [!TIP]
- > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ > To add a network rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified `VirtualNetworkResourceId` parameter in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`.
1. Remove a network rule for a virtual network and subnet. ```azurepowershell-interactive $subParameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myvnet"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myvnet"
} $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -VirtualNetworkResourceId $subnet.Id
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "VirtualNetworkResourceId" = $subnet.Id
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
-1. List virtual network rules.
+1. List the configured virtual network rules.
```azurecli-interactive az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--query virtualNetworkRules ```
-1. Enable service endpoint for Azure AI services on an existing virtual network and subnet.
+1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet.
```azurecli-interactive
- az network vnet subnet update -g "myresourcegroup" -n "mysubnet" \
+ az network vnet subnet update --resource-group "myresourcegroup" --name "mysubnet" \
--vnet-name "myvnet" --service-endpoints "Microsoft.CognitiveServices" ``` 1. Add a network rule for a virtual network and subnet. ```azurecli-interactive
- $subnetid=(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+ subnetid=$(az network vnet subnet show \
+ --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \
--query id --output tsv) # Use the captured subnet identifier as an argument to the network rule addition az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--subnet $subnetid ``` > [!TIP]
- > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name".
+ > To add a rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified subnet ID in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`.
>
- > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant.
+ > You can use the `--subscription` parameter to retrieve the subnet ID for a virtual network that belongs to another Azure AD tenant.
1. Remove a network rule for a virtual network and subnet. ```azurecli-interactive $subnetid=(az network vnet subnet show \
- -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \
+ --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \
--query id --output tsv) # Use the captured subnet identifier as an argument to the network rule removal az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
+ --resource-group "myresourcegroup" --name "myaccount" \
--subnet $subnetid ``` *** > [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect.
## Grant access from an internet IP range
-You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, effectively blocking general internet traffic.
+You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, which effectively blocks general internet traffic.
-Provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
+You can specify the allowed internet address ranges by using [CIDR format (RFC 4632)](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`.
> [!Tip]
- > Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules.
+ > Small address ranges that use `/31` or `/32` prefix sizes aren't supported. Configure these ranges by using individual IP address rules.
+
+IP network rules are only allowed for *public internet* IP addresses. IP address ranges reserved for private networks aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`. For more information, see [Private Address Space (RFC 1918)](https://tools.ietf.org/html/rfc1918#section-3).
+
+Currently, only IPv4 addresses are supported. Each Azure AI services resource supports up to 100 IP network rules, which can be combined with [virtual network rules](#grant-access-from-a-virtual-network).
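+
+As a quick illustration, an IP rule can also be added from the command line. This is a sketch that reuses the placeholder resource names from earlier in this article; the address range is an example public range.
+
+```azurecli-interactive
+# Add a public IP address range to the account's allow list.
+# Resource names and the address range are placeholders.
+az cognitiveservices account network-rule add \
+    --resource-group "myresourcegroup" \
+    --name "myaccount" \
+    --ip-address "16.17.18.0/24"
+```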
-IP network rules are only allowed for **public internet** IP addresses. IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`.
+### Configure access from on-premises networks
-Only IPV4 addresses are supported at this time. Each Azure AI services resource supports up to 100 IP network rules, which may be combined with [Virtual network rules](#grant-access-from-a-virtual-network).
+To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, identify the internet-facing IP addresses used by your network. Contact your network administrator for help.
-### Configuring access from on-premises networks
+If you use Azure ExpressRoute on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For more information, see [What is Azure ExpressRoute](../expressroute/expressroute-introduction.md).
-To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help.
+For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or supplied by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting.
-If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering)
+To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) by using the Azure portal. For more information, see [NAT requirements for Azure public peering](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering).
### Managing IP network rules
You can manage IP network rules for Azure AI services resources through the Azur
1. Go to the Azure AI services resource you want to secure.
-1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**.
+1. Select **Resource Management** to expand it, then select **Networking**.
-1. Check that you've selected to allow access from **Selected networks**.
+1. Confirm that you selected **Selected Networks and Private Endpoints**.
-1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (non-reserved) addresses are accepted.
+1. Under **Firewalls and virtual networks**, locate the **Address range** option. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)). Only valid public IP (nonreserved) addresses are accepted.
- ![Add IP range](media/vnet/virtual-network-add-ip-range.png)
+ :::image type="content" source="media/vnet/virtual-network-add-ip-range.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and the Address range highlighted." lightbox="media/vnet/virtual-network-add-ip-range.png":::
-1. To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
-
- ![Delete IP range](media/vnet/virtual-network-delete-ip-range.png)
+ To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell)
-1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**.
+1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**.
-1. List IP network rules.
+1. List the configured IP network rules.
- ```azurepowershell-interactive
- $parameters = @{
- "ResourceGroupName"= "myresourcegroup"
- "Name"= "myaccount"
-}
+ ```azurepowershell-interactive
+ $parameters = @{
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ }
(Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).IPRules ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.19"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "ipaddress"
} Add-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.0/24"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "CIDR"
} Add-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.19"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "ipaddress"
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ```
You can manage IP network rules for Azure AI services resources through the Azur
```azurepowershell-interactive $parameters = @{
- -ResourceGroupName "myresourcegroup"
- -Name "myaccount"
- -IPAddressOrRange "16.17.18.0/24"
+ "ResourceGroupName" = "myresourcegroup"
+ "Name" = "myaccount"
+ "IPAddressOrRange" = "CIDR"
} Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli)
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**.
-1. List IP network rules.
+1. List the configured IP network rules.
```azurecli-interactive az cognitiveservices account network-rule list \
- -g "myresourcegroup" -n "myaccount" --query ipRules
+ --resource-group "myresourcegroup" --name "myaccount" --query ipRules
``` 1. Add a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "ipaddress"
``` 1. Add a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule add \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "CIDR"
``` 1. Remove a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.19"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "ipaddress"
``` 1. Remove a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule remove \
- -g "myresourcegroup" -n "myaccount" \
- --ip-address "16.17.18.0/24"
+ --resource-group "myresourcegroup" --name "myaccount" \
+ --ip-address "CIDR"
``` *** > [!IMPORTANT]
-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect.
+> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect.
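As a sketch, you can also set the default action from the command line with the generic `az resource update` command; the resource names here are hypothetical:

```azurecli-interactive
# Sketch: set the default network action to Deny so that only
# configured network rules (and private endpoints) allow traffic
az resource update \
    --resource-group "myresourcegroup" --name "myaccount" \
    --resource-type "Microsoft.CognitiveServices/accounts" \
    --set properties.networkAcls.defaultAction="Deny"
```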
## Use private endpoints
-You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the VNet address space for your Azure AI services resource. Network traffic between the clients on the VNet and the resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
+You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network to securely access data over [Azure Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the virtual network address space for your Azure AI services resource. Network traffic between the clients on the virtual network and the resource traverses the virtual network and a private link on the Microsoft Azure backbone network, which eliminates exposure from the public internet.
Private endpoints for Azure AI services resources let you:
-* Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
-* Increase security for the VNet, by enabling you to block exfiltration of data from the VNet.
-* Securely connect to Azure AI services resources from on-premises networks that connect to the VNet using [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering.
+- Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service.
+- Increase security for the virtual network by enabling you to block exfiltration of data from the virtual network.
+- Securely connect to Azure AI services resources from on-premises networks that connect to the virtual network by using [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../expressroute/expressroute-locations.md) with private peering.
-### Conceptual overview
+### Understand private endpoints
-A private endpoint is a special network interface for an Azure resource in your [VNet](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your VNet and your resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Azure AI services service uses a secure private link.
+A private endpoint is a special network interface for an Azure resource in your [virtual network](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your virtual network and your resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Azure AI services service uses a secure private link.
-Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
+Applications in the virtual network can connect to the service over the private endpoint seamlessly. Connections use the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST.
-Private endpoints can be created in subnets that use [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). Clients in a subnet can connect to one Azure AI services resource using private endpoint, while using service endpoints to access others.
+Private endpoints can be created in subnets that use service endpoints. Clients in a subnet can connect to one Azure AI services resource using a private endpoint, while using service endpoints to access others. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
-When you create a private endpoint for an Azure AI services resource in your VNet, a consent request is sent for approval to the Azure AI services resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
+When you create a private endpoint for an Azure AI services resource in your virtual network, Azure sends a consent request for approval to the Azure AI services resource owner. If the user who requests the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved.
-Azure AI services resource owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com).
+Azure AI services resource owners can manage consent requests and the private endpoints through the **Private endpoint connection** tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com).
-### Private endpoints
+### Specify private endpoints
-When creating the private endpoint, you must specify the Azure AI services resource it connects to. For more information on creating a private endpoint, see:
+When you create a private endpoint, specify the Azure AI services resource that it connects to. For more information on creating a private endpoint, see:
-* [Create a private endpoint using the Private Link Center in the Azure portal](../private-link/create-private-endpoint-portal.md)
-* [Create a private endpoint using Azure CLI](../private-link/create-private-endpoint-cli.md)
-* [Create a private endpoint using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using the Azure portal](../private-link/create-private-endpoint-portal.md)
+- [Create a private endpoint by using Azure PowerShell](../private-link/create-private-endpoint-powershell.md)
+- [Create a private endpoint by using the Azure CLI](../private-link/create-private-endpoint-cli.md)
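For a rough idea of the CLI flow, here's a sketch with hypothetical names; it assumes the `account` group ID (the target subresource named elsewhere in this document) and an existing virtual network and subnet:

```azurecli-interactive
# Sketch: create a private endpoint for an Azure AI services account
accountid=$(az cognitiveservices account show \
    --resource-group "myresourcegroup" --name "myaccount" \
    --query id --output tsv)

az network private-endpoint create \
    --resource-group "myresourcegroup" --name "myaccount-pe" \
    --vnet-name "myvnet" --subnet "mysubnet" \
    --private-connection-resource-id $accountid \
    --group-id account \
    --connection-name "myaccount-pe-connection"
```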
-### Connecting to private endpoints
+### Connect to private endpoints
> [!NOTE]
-> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names.
+> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. For the correct zone and forwarder names, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
-Clients on a VNet using the private endpoint should use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Azure AI services resource over a private link.
+Clients on a virtual network that use the private endpoint use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. DNS resolution automatically routes the connections from the virtual network to the Azure AI services resource over a private link.
-We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
+By default, Azure creates a [private DNS zone](../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints. If you use your own DNS server, you might need to make more changes to your DNS configuration. For updates that might be required for private endpoints, see [Apply DNS changes for private endpoints](#apply-dns-changes-for-private-endpoints) in this article.
-### Private endpoints with the Speech Services
+### Use private endpoints with the Speech service
-See [Using Speech Services with private endpoints provided by Azure Private Link](Speech-Service/speech-services-private-link.md).
+See [Use Speech service through a private endpoint](Speech-Service/speech-services-private-link.md).
-### DNS changes for private endpoints
+### Apply DNS changes for private endpoints
-When you create a private endpoint, the DNS CNAME resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints.
+When you create a private endpoint, the DNS `CNAME` resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, Azure also creates a private DNS zone that corresponds to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. For more information, see [What is Azure Private DNS](../dns/private-dns-overview.md).
-When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
+When you resolve the endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When it's resolved from the virtual network hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address.
-This approach enables access to the Azure AI services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet.
+This approach enables access to the Azure AI services resource using the same connection string for clients in the virtual network that hosts the private endpoints and clients outside the virtual network.
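One quick way to observe this behavior is to resolve the endpoint FQDN from a VM inside the virtual network and then from a machine outside it; the account name below is hypothetical:

```bash
# Inside the virtual network: expect a private IP from the subnet range.
# Outside the virtual network: expect the public endpoint address.
nslookup myaccount.cognitiveservices.azure.com
```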
-If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet.
+If you use a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network.
> [!TIP]
-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
+> When you use a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the `privatelink` subdomain to the private endpoint IP address. Delegate the `privatelink` subdomain to the private DNS zone of the virtual network. Alternatively, configure the DNS zone on your DNS server and add the DNS A records.
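As a sketch of the delegation alternative, the following commands create the `privatelink` zone used by most Azure AI services and link it to the virtual network. The resource names are hypothetical, and remember that Azure OpenAI uses a different zone, as noted earlier:

```azurecli-interactive
# Sketch: create the private DNS zone and link it to the virtual network
az network private-dns zone create \
    --resource-group "myresourcegroup" \
    --name "privatelink.cognitiveservices.azure.com"

az network private-dns link vnet create \
    --resource-group "myresourcegroup" \
    --zone-name "privatelink.cognitiveservices.azure.com" \
    --name "myvnet-dns-link" \
    --virtual-network "myvnet" \
    --registration-enabled false
```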
-For more information on configuring your own DNS server to support private endpoints, see the following articles:
+For more information on configuring your own DNS server to support private endpoints, see the following resources:
-* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
-* [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
+- [Name resolution that uses your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)
+- [DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration)
### Pricing
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
## Next steps
-* Explore the various [Azure AI services](./what-are-ai-services.md)
-* Learn more about [Azure Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
+- Explore the various [Azure AI services](./what-are-ai-services.md)
+- Learn more about [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/network-isolation.md
This will establish a private endpoint connection between language resource and
Follow the steps below to restrict public access to question answering language resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal). After you restrict access to an Azure AI services resource based on VNet, use the following steps to browse projects on Language Studio from your on-premises network or your local browser.-- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks).
- Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the public IP address of the machine under the **Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
ai-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md
The Cognitive Search instance can be isolated via a private endpoint after the Q
Follow the steps below to restrict public access to QnA Maker resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal). After you restrict access to the Azure AI service resource based on VNet, use the following steps to browse knowledgebases on the https://qnamaker.ai portal from your on-premises network or your local browser.-- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).
+- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks).
- Grant access to your [local browser/machine](../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the public IP address of the machine under the **Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**.
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Batch transcription requests for expired models will fail with a 4xx error. You'
The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
-You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic.
+You can store the results of a batch transcription in a writable Azure Blob storage container by using the option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note, however, that this option uses only an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support access policy based SAS. The Storage account resource of the destination container must allow all external traffic.
If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). See details on how to use BYOS-enabled Speech resource for Batch transcription in [this article](bring-your-own-storage-speech-resource-speech-to-text.md).
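For illustration, here's a hedged sketch of a creation request that sets `destinationContainerUrl`, assuming the v3.1 REST API; the region, key, and SAS URLs are placeholders:

```bash
# Sketch: ask the Speech service to write transcription results to your
# own container via an ad hoc SAS URL in properties.destinationContainerUrl
curl -X POST "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
    -H "Ocp-Apim-Subscription-Key: <your-speech-key>" \
    -H "Content-Type: application/json" \
    -d '{
        "displayName": "My transcription",
        "locale": "en-US",
        "contentUrls": ["https://<account>.blob.core.windows.net/audio/sample.wav?<SAS>"],
        "properties": {
            "destinationContainerUrl": "https://<account>.blob.core.windows.net/results?<ad-hoc-SAS>"
        }
    }'
```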
ai-services Get Started Stt Diarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md
Last updated 7/27/2023
-zone_pivot_groups: programming-languages-set-twenty-two
+zone_pivot_groups: programming-languages-speech-services
keywords: speech to text, speech to text software
keywords: speech to text, speech to text software
[!INCLUDE [C++ include](includes/quickstarts/stt-diarization/cpp.md)] ::: zone-end + ::: zone pivot="programming-language-java" [!INCLUDE [Java include](includes/quickstarts/stt-diarization/java.md)] ::: zone-end +++ ::: zone pivot="programming-language-python" [!INCLUDE [Python include](includes/quickstarts/stt-diarization/python.md)] ::: zone-end +++ ## Next steps > [!div class="nextstepaction"]
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
The base model may not be sufficient if the audio contains ambient noise or incl
With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as: - Transcriptions, captions, or subtitles for live meetings
+- [Diarization](get-started-stt-diarization.md)
+- [Pronunciation assessment](how-to-pronunciation-assessment.md)
- Contact center agent assist - Dictation - Voice agents-- Pronunciation assessment ### Batch transcription
ai-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-private-link.md
Use these parameters instead of the parameters in the article that you chose:
| Resource | **\<your-speech-resource-name>** | | Target sub-resource | **account** |
-**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
+**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#apply-dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections.
### Resolve DNS from the virtual network
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
These limits aren't adjustable.
| Max number of simultaneous dataset uploads | N/A | 5 | | Max data file size for data import per dataset | N/A | 2 GB | | Upload of long audios or audios without script | N/A | Yes |
-| Max number of simultaneous model trainings | N/A | 3 |
+| Max number of simultaneous model trainings | N/A | 4 |
| Max number of custom endpoints | N/A | 50 | #### Audio Content Creation tool
ai-services Deploy User Managed Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/deploy-user-managed-glossary.md
+
+ Title: Deploy a user-managed glossary in Translator container
+
+description: How to deploy a user-managed glossary in the Translator container environment.
++++++ Last updated : 08/15/2023+
+recommendations: false
++
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD046 -->
+
+# Deploy a user-managed glossary
+
+Microsoft Translator containers enable you to run several features of the Translator service in your own environment and are great for specific security and data governance requirements.
+
+There may be times when you're running a container with a multi-layered ingestion process and discover that you need to update sentence or phrase files. Since the standard phrase and sentence files are encrypted and read directly into memory at runtime, you need a quick-fix engineering solution to implement a dynamic update. You can implement this update by using our user-managed glossary feature:
+
+* To deploy the **phrase&#8203;fix** solution, you need to create a **phrase&#8203;fix** glossary file to specify that a listed phrase is translated in a specified way.
+
+* To deploy the **sent&#8203;fix** solution, you need to create a **sent&#8203;fix** glossary file to specify an exact target translation for a source sentence.
+
+* The **phrase&#8203;fix** and **sent&#8203;fix** files are then included with your translation request and read directly into memory at runtime.
+
+## Managed glossary workflow
+
+ > [!IMPORTANT]
+ > **UTF-16 LE** is the only accepted file format for the managed-glossary folders. For more information about encoding your files, *see* [Encoding](/powershell/module/microsoft.powershell.management/set-content?view=powershell-7.2#-encoding&preserve-view=true)
+
+1. To get started manually creating the folder structure, you need to create and name your folder. The managed-glossary folder is encoded in **UTF-16 LE BOM** format and nests **phrase&#8203;fix** or **sent&#8203;fix** source and target language files. Let's name our folder `customhotfix`. Each folder can have **phrase&#8203;fix** and **sent&#8203;fix** files. You provide the source (`src`) and target (`tgt`) language codes with the following naming convention:
+
+ |Glossary file name format|Example file name |
+ |--|--|
+ |{`src`}.{`tgt`}.{container-glossary}.{phrase&#8203;fix}.src.snt|en.es.container-glossary.phrasefix.src.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{phrase&#8203;fix}.tgt.snt|en.es.container-glossary.phrasefix.tgt.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{sent&#8203;fix}.src.snt|en.es.container-glossary.sentfix.src.snt|
+ |{`src`}.{`tgt`}.{container-glossary}.{sent&#8203;fix}.tgt.snt|en.es.container-glossary.sentfix.tgt.snt|
+
+ > [!NOTE]
+ >
+ > * The **phrase&#8203;fix** solution is an exact find-and-replace operation. Any word or phrase listed is translated in the way specified.
+ > * The **sent&#8203;fix** solution is more precise and allows you to specify an exact target translation for a source sentence. For a sentence match to occur, the entire submitted sentence must match the **sent&#8203;fix** entry. If only a portion of the sentence matches, the entry won't match.
+ > * If you're hesitant about making sweeping find-and-replace changes, we recommend, at the outset, solely using the **sent&#8203;fix** solution.
+
+1. Next, to dynamically reload glossary entry updates, create a `version.json` file within the `customhotfix` folder. The `version.json` file should contain a single parameter, **VersionId**, with an integer value.
+
+ ***Sample version.json file***
+
+ ```json
+ {
+
+ "VersionId": 5
+
+ }
+
+ ```
+
+ > [!TIP]
+ >
+ > Reload can be controlled by setting the following environmental variables when starting the container:
+ >
+ > * **HotfixReloadInterval=**. Default value is 5 minutes.
+ > * **HotfixReloadEnabled=**. Default value is true.
+
+1. Use the **docker run** command
+
+ **Docker run command required options**
+
+ ```bash
+ docker run --rm -it -p 5000:5000 \
+
+ -e eula=accept \
+
+ -e billing={ENDPOINT_URI} \
+
+ -e apikey={API_KEY} \
+
+ -e Languages={LANGUAGES_LIST} \
+
+ -e HotfixDataFolder={path to glossary folder}
+
+ {image}
+ ```
+
+ **Example docker run command**
+
+ ```bash
+
+ docker run --rm -it -p 5000:5000 \
+ -v /mnt/d/models:/usr/local/models -v /mnt/d/customerhotfix:/usr/local/customhotfix \
+ -e EULA=accept \
+ -e billing={ENDPOINT_URI} \
+ -e apikey={API_Key} \
+ -e Languages=en,es \
+ -e HotfixDataFolder=/usr/local/customhotfix \
+ mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest
+
+ ```
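
Putting the folder layout and encoding requirement together, here's a hedged sketch of creating a minimal **phrase&#8203;fix** pair. It assumes entries pair up line by line between the source and target files (an assumption, not confirmed by this article), and it writes the UTF-16 LE BOM explicitly:

```bash
# Sketch (assumes line-aligned src/tgt entries): create a phrasefix pair
# in the customhotfix folder, encoded as UTF-16 LE with a BOM.
# This example keeps a brand phrase untranslated in Spanish output.
mkdir -p customhotfix
for f in src tgt; do
    printf '\xff\xfe' > "customhotfix/en.es.container-glossary.phrasefix.$f.snt"
done
printf 'Contoso Widget\n' | iconv -f UTF-8 -t UTF-16LE \
    >> customhotfix/en.es.container-glossary.phrasefix.src.snt
printf 'Contoso Widget\n' | iconv -f UTF-8 -t UTF-16LE \
    >> customhotfix/en.es.container-glossary.phrasefix.tgt.snt
```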
+
+## Learn more
+
+> [!div class="nextstepaction"]
+> [Create a dynamic dictionary](../dynamic-dictionary.md) [Use a custom dictionary](../custom-translator/concepts/dictionaries.md)
ai-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md
keywords: on-premises, Docker, container, identify
# Install and run Translator containers
-Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container.
+Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you learn how to download, install, and run a Translator container.
Translator container enables you to build a translator application architecture that is optimized for both robust cloud capabilities and edge locality.
See the list of [languages supported](../language-support.md) when using Transla
> [!IMPORTANT] >
-> * To use the Translator container, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below.
-> * Translator container supports limited features compared to the cloud offerings. Form more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
+> * To use the Translator container, you must submit an online request and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
+> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md).
<!-- markdownlint-disable MD033 --> ## Prerequisites
-To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-You'll also need to have:
+You also need:
| Required | Purpose | |--|--|
-| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
+| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> |
| Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |
-| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with region other than 'global', associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
+| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) regional resource (not `global`) with an associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>|
|Optional|Purpose| ||-|
curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS
There are several ways to validate that the container is running:
-* The container provides a homepage at `\` as a visual validation that the container is running.
+* The container provides a homepage at `/` as a visual validation that the container is running.
-* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate the container is running. The example request URLs listed point to `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
| Request URL | Purpose | |--|--|
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
description: Learn how to use the Azure CLI to create and Azure Active Directory
Previously updated : 07/07/2023 Last updated : 08/15/2023 # Integrate Azure Active Directory with Azure Kubernetes Service (AKS) using the Azure CLI (legacy) > [!WARNING]
-> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from August 1st, 2023.
+> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from December 1st, 2023.
> > AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate follow the instructions [here][managed-aad-migrate].
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 05/17/2023 Last updated : 08/16/2023 # Create and use a volume with Azure Blob storage in Azure Kubernetes Service (AKS)
This section provides guidance for cluster administrators who want to provision
|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.| |resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.| |storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
+|networkEndpointType| Specify the network endpoint type for the storage account created by the driver. If `privateEndpoint` is specified, a [private endpoint][storage-account-private-endpoint] is created for the storage account. For other cases, a service endpoint is created for the NFS protocol.<sup>1</sup> | `privateEndpoint` | No | For an AKS cluster, add the AKS cluster name to the Contributor role in the resource group hosting the VNET.|
|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`| |containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. | |containerNamePrefix | Specify Azure storage directory prefix created by driver. | my |Can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
This section provides guidance for cluster administrators who want to provision
| | **Following parameters are only for NFS protocol** | | | | |mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No |
+<sup>1</sup> If the storage account is created by the driver, then you only need to specify `networkEndpointType: privateEndpoint` parameter in storage class. The CSI driver creates the private endpoint together with the account. If you bring your own storage account, then you need to [create the private endpoint][storage-account-private-endpoint] for the storage account.
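As an illustration, a minimal storage class sketch that sets this parameter might look like the following, applied via a shell heredoc; the class name and other parameters are placeholders:

```bash
# Sketch: a storage class asking the Blob CSI driver to create
# a private endpoint for the storage account it provisions
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blob-nfs-private
provisioner: blob.csi.azure.com
parameters:
  protocol: nfs
  networkEndpointType: privateEndpoint
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
```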
+ ### Create a persistent volume claim using built-in storage class A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation.
This section provides guidance for cluster administrators who want to create one
### Create a Blob storage container
-When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role to the blob storage resource group.
+When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource.
For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
The following YAML creates a pod that uses the persistent volume or persistent v
[az-tags]: ../azure-resource-manager/management/tag-resources.md [sas-tokens]: ../storage/common/storage-sas-overview.md [azure-datalake-storage-account]: ../storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
+[storage-account-private-endpoint]: ../storage/common/storage-private-endpoints.md
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 05/17/2023 Last updated : 08/16/2023 # Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
The following YAML creates a pod that uses the persistent volume claim *my-azure
metadata: name: mypod spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: /mnt/azure
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: my-azurefile
``` 2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
This article requires Azure CLI version 2.0.76 or later. Run `az --version` to f
To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways:
-* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes.
+* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work)
* The **horizontal pod autoscaler** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand. ![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png)
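For reference, here's a sketch of enabling the cluster autoscaler on an existing cluster; the names and counts are illustrative:

```azurecli-interactive
# Sketch: enable the cluster autoscaler with a node-count range
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```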
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
For more information to help you decide which network model to use, see [Compare
--service-cidr 10.0.0.0/16 \ --dns-service-ip 10.0.0.10 \ --pod-cidr 10.244.0.0/16 \
- --docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id $SUBNET_ID ```
For more information to help you decide which network model to use, see [Compare
* This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed. * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*. * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.
- * *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16.
> [!NOTE] > If you want to enable an AKS cluster to include a [Calico network policy][calico-network-policies], you can use the following command:
For more information to help you decide which network model to use, see [Compare
> --resource-group myResourceGroup \ > --name myAKSCluster \ > --node-count 3 \
-> --network-plugin kubenet --network-policy calico \
+> --network-plugin kubenet \
+> --network-policy calico \
> --vnet-subnet-id $SUBNET_ID > ```
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
The Azure Linux container host for AKS is an open-source Linux distribution avai
az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
- --name azurelinuxpool \
+ --name azlinuxpool \
--os-sku AzureLinux ```
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
spec:
This example updates the rule to allow inbound external traffic only from the `MY_EXTERNAL_IP_RANGE` range. If you replace `MY_EXTERNAL_IP_RANGE` with the internal subnet IP address, traffic is restricted to only cluster internal IPs. If traffic is restricted to cluster internal IPs, clients outside your Kubernetes cluster are unable to access the load balancer. > [!NOTE]
-> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
+> * Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer.
+> * For clusters running version 1.25 or later, add the pod CIDR to `loadBalancerSourceRanges` if pods need to access the service's load balancer IP.
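To make the pod CIDR point concrete, here's a hedged service manifest sketch, applied via a heredoc; the address ranges are placeholders:

```bash
# Sketch: allow an external range plus the pod CIDR (for v1.25+ clusters)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  selector:
    app: azure-vote-front
  ports:
  - port: 80
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # MY_EXTERNAL_IP_RANGE
  - 10.244.0.0/16    # pod CIDR
EOF
```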
## Maintain the client's IP on inbound connections
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 04/28/2023 Last updated : 08/15/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)
Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv
> Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >
-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2023.
+> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024.
> > To disable the AKS Managed add-on, use the following command: `az feature unregister --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"`.
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
description: Learn how to control pod admissions using PodSecurityPolicy in Azur
Last updated 08/01/2023+ # Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview)
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
For more information about the information assets and capabilities in API Center
## Preview limitations * In preview, API Center is available in the following Azure regions:-
- * East US
- * UK South
- * Central India
- * Australia East
-
+ * Australia East
+ * Central India
+ * East US
+ * UK South
+ * West Europe
+
## Frequently asked questions ### Q: Is API Center part of Azure API Management?
A: Yes, all data in API Center is encrypted at rest.
> [!div class="nextstepaction"] > [Provide feedback](https://aka.ms/apicenter/preview/feedback)+
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
### Usage notes
+- API Management only performs cache lookup for HTTP GET requests.
* When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. When you deactivate this flag, parameters that aren't declared are sent to the backend. - This policy can only be used once in a policy section.
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
The `cache-store` policy caches responses according to the specified cache setti
### Usage notes
+- API Management only caches responses to HTTP GET requests.
- This policy can only be used once in a policy section.
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c
Last updated 01/31/2023 + # Map an existing custom DNS name to Azure App Service
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
ms.devlang: csharp
Last updated 01/31/2023 + # Tutorial: Host a RESTful API with CORS in Azure App Service
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
description: Learn how to configure a PHP app in a pre-built PHP container, in A
ms.devlang: php Previously updated : 05/09/2023 Last updated : 08/31/2023 zone_pivot_groups: app-service-platform-windows-linux+
For more information on how App Service runs and builds PHP apps in Linux, see [
## Customize start-up
-By default, the built-in PHP container runs the Apache server. At start-up, it runs `apache2ctl -D FOREGROUND"`. If you like, you can run a different command at start-up, by running the following command in the [Cloud Shell](https://shell.azure.com):
+If you want, you can run a custom command at container start-up by running the following command in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>"
By default, Azure App Service points the root virtual application path (*/*) to
The web framework of your choice may use a subdirectory as the site root. For example, [Laravel](https://laravel.com/), uses the `public/` subdirectory as the site root.
-The default PHP image for App Service uses Apache, and it doesn't let you customize the site root for your app. To work around this limitation, add an *.htaccess* file to your repository root with the following content:
+The default PHP image for App Service uses Nginx, and you change the site root by [configuring the Nginx server with the `root` directive](https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/). This [example configuration file](https://github.com/Azure-Samples/laravel-tasks/blob/main/default) contains the following snippet, which changes the `root` directive:
```
-<IfModule mod_rewrite.c>
- RewriteEngine on
- RewriteCond %{REQUEST_URI} ^(.*)
- RewriteRule ^(.*)$ /public/$1 [NC,L,QSA]
-</IfModule>
+server {
+ #proxy_cache cache;
+ #proxy_cache_valid 200 1s;
+ listen 8080;
+ listen [::]:8080;
+ root /home/site/wwwroot/public; # Changed for Laravel
+
+ location / {
+        index index.php index.html index.htm hostingstart.html;
+ try_files $uri $uri/ /index.php?$args; # Changed for Laravel
+ }
+ ...
+```
+
+The default container uses the configuration file found at */etc/nginx/sites-available/default*. Keep in mind that any edit you make to this file is erased when the app restarts. To make a change that is effective across app restarts, [add a custom start-up command](#customize-start-up) like this example:
+
+```
+cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload
```
-If you would rather not use *.htaccess* rewrite, you can deploy your Laravel application with a [custom Docker image](quickstart-custom-container.md) instead.
+This command replaces the default Nginx configuration file with a file named *default* in your repository root and reloads Nginx.
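To put the pieces together, one way is to set that copy command itself as the start-up command, reusing the `az webapp config set` pattern shown earlier (a sketch; substitute your own resource group and app name):

```azurecli
az webapp config set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --startup-file "cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload"
```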
::: zone-end
Then, go to the Azure portal and add an Application Setting to scan the "ini" di
::: zone pivot="platform-windows"
-To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), you can't use the *.htaccess* approach. App Service provides a separate mechanism using the `PHP_INI_SCAN_DIR` app setting.
+To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), use the `PHP_INI_SCAN_DIR` app setting.
First, run the following command in the [Cloud Shell](https://shell.azure.com) to add an app setting called `PHP_INI_SCAN_DIR`:
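A sketch of that command, assuming */home/site/ini* as an example directory that holds your custom *.ini* files:

```azurecli
# Assumed example: /usr/local/etc/php/conf.d is the image's default scan
# directory; /home/site/ini is an example directory for your custom .ini files
az webapp config appsettings set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --settings PHP_INI_SCAN_DIR="/usr/local/etc/php/conf.d:/home/site/ini"
```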
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
Title: Configure Linux Python apps
description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Last updated 11/16/2022-++ ms.devlang: python adobe-target: true
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
Last updated 04/20/2023 + # Secure a custom DNS name with a TLS/SSL binding in Azure App Service
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
Last updated 07/28/2023 + # Add and manage TLS/SSL certificates in Azure App Service
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
For more information, see [Kudu documentation](https://github.com/projectkudu/ku
You can deploy your [WAR](https://wikipedia.org/wiki/WAR_(file_format)), [JAR](https://wikipedia.org/wiki/JAR_(file_format)), or [EAR](https://wikipedia.org/wiki/EAR_(file_format)) package to App Service to run your Java web app using the Azure CLI, PowerShell, or the Kudu publish API.
-The deployment process places the package on the shared file drive correctly (see [Kudu publish API reference](#kudu-publish-api-reference)). For that reason, deploying WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy is not recommended.
+The deployment process used by the steps here places the package on the app's content share with the right naming convention and directory structure (see [Kudu publish API reference](#kudu-publish-api-reference)), and it's the recommended approach. If you deploy WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy instead, you may see unexpected failures caused by mistakes in the naming or directory structure.
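For instance, with the Azure CLI a single `az webapp deploy` call handles the naming and placement for you (a sketch; the path *target/app.war* is an assumed example, and the tabs that follow cover each tool in detail):

```azurecli
# Deploy a WAR package; --type tells App Service how to name and place it
az webapp deploy \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --src-path target/app.war \
    --type war
```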
# [Azure CLI](#tab/cli)
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
ms.assetid: 70fb0e6e-8727-4cca-ba82-98a4d21586ff
Last updated 01/31/2023 + # Buy an App Service domain and configure an app with it
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
ms.assetid: 94af2caf-a2ec-4415-a097-f60694b860b3
Last updated 07/19/2023 + # App Service overview
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure'
description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Last updated 07/26/2023--+ ms.devlang: python
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
You're now ready to develop and debug your app with the SQL Database as the back
> It is replaced with new **Azure Identity client library** available for .NET, Java, TypeScript and Python and should be used for all new development. > Information about how to migrate to `Azure Identity`can be found here: [AppAuthentication to Azure.Identity Migration Guidance](/dotnet/api/overview/azure/app-auth-migration).
-The steps you follow for your project depends on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core).
+The steps you follow for your project depend on whether you're using [Entity Framework Core](/ef/core/) (default for ASP.NET Core) or [Entity Framework](/ef/ef6/) (default for ASP.NET).
+
+# [Entity Framework Core](#tab/efcore)
+
+1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient):
+
+ ```powershell
+ Install-Package Microsoft.Data.SqlClient -Version 5.1.0
+ ```
+
+1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
+
+ ```json
+ "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;"
+ ```
+
+ > [!NOTE]
+    > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI (you can verify the Azure CLI path with the quick check after these steps).
+ >
+
+ That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
+
+1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
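As a quick, optional check of the Azure CLI fallback mentioned in the note above, you can ask the CLI for an Azure SQL token directly; if this succeeds, `DefaultAzureCredential` can use the same signed-in context (a sketch, not part of the app code):

```azurecli
# Request a token for Azure SQL using the current az login session
az account get-access-token --resource https://database.windows.net/
```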
# [Entity Framework](#tab/ef)
The steps you follow for your project depends on whether you're using [Entity Fr
1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
-# [Entity Framework Core](#tab/efcore)
-
-1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient):
-
- ```powershell
- Install-Package Microsoft.Data.SqlClient -Version 5.1.0
- ```
-
-1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
-
- ```json
- "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;"
- ```
-
- > [!NOTE]
- > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI.
- >
-
- That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
-
-1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
- -- ## 4. Use managed identity connectivity
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Last updated 09/06/2022
ms.role: developer ms.devlang: javascript+
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
description: Create a Python Django or Flask web app with a PostgreSQL database
ms.devlang: python Last updated 02/28/2023+ zone_pivot_groups: deploy-python-web-app-postgresql
automation Automation Use Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md
description: This article tells how to use Azure AD within Azure Automation as t
Last updated 05/26/2023 -+ # Use Azure AD to authenticate to Azure
automation Manage Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-office-365.md
description: This article tells how to use Azure Automation to manage Office 365
Last updated 11/05/2020 + # Manage Office 365 services
To publish and then schedule your runbook, see [Manage runbooks in Azure Automat
* For details of credential use, see [Manage credentials in Azure Automation](shared-resources/credentials.md). * For information about modules, see [Manage modules in Azure Automation](shared-resources/modules.md). * If you need to start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
-* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview).
+* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview).
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
description: This article tells how to troubleshoot and resolve issues that aris
Last updated 04/26/2023 -+ # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-mon
>[!NOTE] >In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md). >+ ### Advisor recommendations The **Advisor recommendations** on the left displays recommendations for your cache. During normal operations, no recommendations are displayed.
Further information can be found on the **Recommendations** in the working pane
You can monitor these metrics on the [Monitoring](cache-how-to-monitor.md) section of the Resource menu.
-Each pricing tier has different limits for client connections, memory, and bandwidth. If your cache approaches maximum capacity for these metrics over a sustained period of time, a recommendation is created. For more information about the metrics and limits reviewed by the **Recommendations** tool, see the following table:
| Azure Cache for Redis metric | More information |
| --- | --- |
| Network bandwidth usage | [Cache performance - available bandwidth](./cache-planning-faq.yml#azure-cache-for-redis-performance) |
Configuration and management of Azure Cache for Redis instances is managed by Mi
- ACL - BGREWRITEAOF - BGSAVE-- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.
+- CLUSTER - Cluster write commands are disabled, but read-only cluster commands are permitted.
- CONFIG - DEBUG - MIGRATE - PSYNC - REPLICAOF
+- REPLCONF - Azure Cache for Redis instances don't allow customers to add external replicas. This [command](https://redis.io/commands/replconf/) is normally only sent by servers.
- SAVE - SHUTDOWN - SLAVEOF
For more information about Redis commands, see [https://redis.io/commands](https
- [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-) - [Monitor Azure Cache for Redis](cache-how-to-monitor.md)+
azure-functions Dotnet Isolated In Process Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md
Use the following table to compare feature and functional differences between th
<sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
-<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, support from some extensions is currently in preview, and Service Bus triggers do not yet support message settlement scenarios.
+<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, Service Bus triggers do not yet support message settlement scenarios.
<sup>5</sup> ASP.NET Core types are not supported for .NET Framework.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
For some service-specific binding types, binding data can be provided using type
| Dependency | Version requirement |
|-|-|
-|[Microsoft.Azure.Functions.Worker]| For **Generally Available** extensions in the table below: 1.18.0 or later<br/>For extensions that have **preview support**: 1.15.0-preview1 |
-|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.13.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 |
+|[Microsoft.Azure.Functions.Worker]| 1.18.0 or later |
+|[Microsoft.Azure.Functions.Worker.Sdk]| 1.13.0 or later |
When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`.
Each trigger and binding extension also has its own minimum version requirement,
| [Azure Service Bus][servicebus-sdk-types] | **Generally Available**<sup>2</sup> | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | [blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
You can configure your isolated process application to emit logs directly [Appli
```dotnetcli dotnet add package Microsoft.ApplicationInsights.WorkerService
-dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights --prerelease
+dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights
``` You then need to call to `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file:
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Functions version 1.x doesn't support isolated worker process. To use the isolat
[ITableEntity]: /dotnet/api/azure.data.tables.itableentity [TableClient]: /dotnet/api/azure.data.tables.tableclient
-[TableEntity]: /dotnet/api/azure.data.tables.tableentity
[CloudTable]: /dotnet/api/microsoft.azure.cosmos.table.cloudtable
Functions version 1.x doesn't support isolated worker process. To use the isolat
[Microsoft.Azure.Cosmos.Table]: /dotnet/api/microsoft.azure.cosmos.table [Microsoft.WindowsAzure.Storage.Table]: /dotnet/api/microsoft.windowsazure.storage.table
-[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage
[storage-4.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/4.0.5
-[storage-5.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0
[table-api-package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Tables/ [extension bundle]: ./functions-bindings-register.md#extension-bundles
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The following table lists the .NET attributes for each binding type and the pack
> | Storage table | [`Microsoft.Azure.WebJobs.TableAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs), [`Microsoft.Azure.WebJobs.StorageAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) | |
> | Twilio | [`Microsoft.Azure.WebJobs.TwilioSmsAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.Twilio"` |
+## Convert a C# script app to a C# project
+
+The easiest way to convert an application using C# script to a C# project is to start with a new project and migrate the code and configuration from your .csx and function.json files.
+
+If you are using C# scripting for portal editing, you may wish to start by [downloading the app content to your local machine](./deployment-zip-push.md#download-your-function-app-files). Choose the "Site content" option instead of "Content and Visual Studio project". The project that the portal provides isn't needed because in the later steps of this section, you will be creating a new Visual Studio project. Similarly, do not include app settings in the download. You are defining a new development environment, and this environment should not have the same permissions as your hosted app environment.
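If you prefer the command line for this step, the same content can also be pulled down with the Kudu zip API; this sketch assumes basic-auth deployment credentials and is only one way to fetch the files:

```bash
# Assumed example: download the content share as a zip via the Kudu zip API
curl -u '<deployment-user>:<deployment-password>' \
    "https://<app-name>.scm.azurewebsites.net/api/zip/site/wwwroot/" \
    --output app-content.zip
```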
+
+Once you have your C# script code ready, you can begin creating the new project:
+
+1. Follow the instructions to create a new function project [using Visual Studio](./functions-create-your-first-function-visual-studio.md), using [Visual Studio Code](./create-first-function-vs-code-csharp.md), or [using the command line](./create-first-function-cli-csharp.md) (a minimal CLI scaffold sketch follows this list). You don't need to publish the project yet.
+1. If your C# script code included an `extensions.csproj` file or any `function.proj` files, copy the package references from these files, and add them to the new project's `.csproj` file alongside its core dependencies.
+
+   The conversion activity is a good opportunity to update to the latest versions of your dependencies. Doing so may require additional code changes in a later step.
+
+1. Copy the contents of the C# scripting `host.json` file into the project `host.json` file. If you are combining this with any other migration activities, note that the [`host.json`](./functions-host-json.md) schema depends on the version you are targeting. The contents of the `extensions` section are also informed by the versions of triggers and bindings that you are using. Refer to the reference for each extension to identify the right properties to configure.
+1. For any [shared files referenced by a `#load` directive](#reusing-csx-code), create new `.cs` files for their contents. You can structure this in any way you prefer. For most apps, it is simplest to create a new `.cs` file for each class that you defined. For any static methods created without a class, you'll need to define a new class or classes to contain them.
+1. Migrate each function to a `.cs` file. This file will combine the `run.csx` and the `function.json` for that function. For example, if you had a function named `HelloWorld`, in C# script this would be represented with `HelloWorld/run.csx` and `HelloWorld/function.json`. For the new project, you would create a `HelloWorld.cs`. Perform the following steps for each function:
+
+    1. Create a new file named `<FUNCTION_NAME>.cs`, replacing `<FUNCTION_NAME>` with the name of the folder that defined your C# script function. It is often easiest to start by creating a new function in the project model, which will cover some of the later steps. From the CLI, you can use the command `func new --name <FUNCTION_NAME>`, making the same substitution and selecting the target template when prompted.
+ 1. Copy the `using` statements from your `run.csx` file and add them to the new file. You do not need any `#r` directives.
+ 1. For any `#load` statement in your `run.csx` file, add a new `using` statement for the namespace you used for the shared code.
+ 1. In the new file, define a class for your function under the namespace you are using for the project.
+ 1. Create a new method named `RunHandler` or something similar. This new method will serve as the new entry point for the function.
+    1. Copy the static method that represents your function, along with any functions it calls, from `run.csx` into your new class as a second method. From the new method you created in the previous step, call into this static method. This indirection step is helpful for navigating any differences as you continue the upgrade. You can keep the original method exactly the same and simply control its inputs from the new context. You may need to create parameters on the new method that you then pass into the static method call. After you have confirmed that the migration has worked as intended, you can remove this extra level of indirection.
+ 1. For each binding in the `function.json` file, add the corresponding attribute to your new method. This may require you to add additional package dependencies. Consult the reference for each binding for specific requirements in the new model.
+
+1. Verify that your project runs locally.
+1. Republish the app to Azure.
+
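As a reference for step 1 of this list, a minimal command-line scaffold might look like the following; the app and function names are placeholders, and `dotnet-isolated` assumes you're also moving to the isolated worker model (use `dotnet` for the in-process model):

```bash
# Create a new project and one HTTP-triggered function to migrate into
func init MyFunctionApp --worker-runtime dotnet-isolated
cd MyFunctionApp
func new --name HelloWorld --template "HTTP trigger"
```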
+### Example function conversion
+
+This section shows an example of the migration for a single function.
+
+The original function in C# scripting has two files:
+- `HelloWorld/function.json`
+- `HelloWorld/run.csx`
+
+The contents of `HelloWorld/function.json` are:
+
+```json
+{
+ "bindings": [
+ {
+      "authLevel": "function",
+ "name": "req",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "name": "$return",
+ "type": "http",
+ "direction": "out"
+ }
+ ]
+}
+```
+
+The contents of `HelloWorld/run.csx` are:
+
+```csharp
+#r "Newtonsoft.Json"
+
+using System.Net;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+}
+```
+
+After migrating to the isolated worker model with ASP.NET Core integration, these are replaced by a single `HelloWorld.cs`:
+
+```csharp
+using System.Net;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+using Microsoft.AspNetCore.Routing;
+using Microsoft.Extensions.Primitives;
+using Newtonsoft.Json;
+
+namespace MyFunctionApp
+{
+ public class HelloWorld
+ {
+ private readonly ILogger _logger;
+
+ public HelloWorld(ILoggerFactory loggerFactory)
+ {
+ _logger = loggerFactory.CreateLogger<HelloWorld>();
+ }
+
+ [Function("HelloWorld")]
+        public async Task<IActionResult> RunHandler([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
+ {
+ return await Run(req, _logger);
+ }
+
+ // From run.csx
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+ }
+ }
+}
+```
+ ## Binding configuration and examples ### Blob trigger
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
When running on Windows, the Node.js version is set by the [`WEBSITE_NODE_DEFAUL
# [Linux](#tab/linux)
-When running on Windows, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI.
+When running on Linux, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI.
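For example, a sketch of that update with the Azure CLI, where `Node|18` is an assumed example value:

```azurecli
az functionapp config set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --linux-fx-version "Node|18"
```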
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
To upgrade the application, you will:
## Upgrade your local project
-The section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version.
+This section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions that match your desired version. These steps assume a local C# project; if your app instead uses C# script (`.csx` files), [convert to the project model](./functions-reference-csharp.md#convert-a-c-script-app-to-a-c-project) before continuing.
> [!TIP] > The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
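If you want to try the assistant, it ships as a .NET global tool; a sketch of installing it and pointing it at a project (the project path is a placeholder, and the interactive run then walks you through the changes):

```bash
# Install the .NET Upgrade Assistant, then run it against a project
dotnet tool install -g upgrade-assistant
upgrade-assistant upgrade ./MyFunctionApp.csproj
```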
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
This section provides information about how to run your function app from a pack
+ When running a function app on Windows, the app setting `WEBSITE_RUN_FROM_PACKAGE = <URL>` gives worse cold-start performance and isn't recommended. + When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. + The Functions runtime must have permissions to access the package URL.
-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../storage/common/storage-sas-overview.md) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ You must maintain any SAS URLs used for deployment. When a SAS expires, the package can no longer be deployed. In this case, you must generate a new SAS and update the setting in your function app (see the sketch after this list). You can eliminate this management burden by [using a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity).
+ When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts).
+ When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on).
+ You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account.
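Rotating an expired SAS comes down to a single app-setting update (a sketch; the placeholder stands in for your own blob URL with its SAS token):

```azurecli
# Point the app at a package in Blob Storage; rotate the SAS before it expires
az functionapp config appsettings set \
    --resource-group <resource-group-name> \
    --name <app-name> \
    --settings WEBSITE_RUN_FROM_PACKAGE="<blob-url-with-sas>"
```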
azure-maps Creator Onboarding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md
The following steps demonstrate how to create an indoor map in your Azure Maps a
:::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-upload.png" alt-text="Screenshot showing the package upload screen of the Azure Maps Creator onboarding tool.":::
-<!--
- > [!NOTE]
- > If the manifest included in the drawing package is incomplete or contains errors, the onboarding tool will not go directly to the **Review + Create** tab, but instead goes to the tab where you are best able to address the issue.
>- 1. Once the package is uploaded, the onboarding tool uses the [Conversion service] to validate the data then convert the geometry and data from the drawing package into a digital indoor map. For more information about the conversion process, see [Convert a drawing package] in the Creator concepts article. :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-conversion.png" alt-text="Screenshot showing the package conversion screen of the Azure Maps Creator onboarding tool, including the Conversion ID value.":::
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Create the web application in Azure AD for users to sign in. The web application
6. Copy the Azure AD app ID and the Azure AD tenant ID from the app registration to use in the Web SDK. Add the Azure AD app registration details and the `x-ms-client-id` from the Azure Map account to the Web SDK. ```javascript
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js" />
<script> var map = new atlas.Map("map", { center: [-122.33, 47.64],
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Your file should now look similar to the following HTML:
<meta name="viewport" content="width=device-width, user-scalable=no" /> <title>Indoor Maps App</title>
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/>
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script> <style>
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
You can embed a map in a web page by using the Map Control client-side JavaScrip
* Use the globally hosted CDN version of the Azure Maps Web SDK by adding references to the JavaScript and `stylesheet` in the `<head>` element of the HTML file: ```html
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
``` * Load the Azure Maps Web SDK source code locally using the [azure-maps-control] npm package and host it with your app. This package also includes TypeScript definitions.
You can embed a map in a web page by using the Map Control client-side JavaScrip
Then add references to the Azure Maps `stylesheet` to the `<head>` element of the file: ```html
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
``` > [!NOTE]
You can embed a map in a web page by using the Map Control client-side JavaScrip
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type="text/javascript">
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
You can load the Azure Maps spatial IO module using one of the two options:
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script>
<script type='text/javascript'>
You can load the Azure Maps spatial IO module using one of the two options:
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
In the following code, the first code block creates a map and sets the enter and
<head> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type="text/javascript">
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
The following code shows how to load a map with the same view in Azure Maps alon
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
When using a Symbol layer, the data must be added to a data source, and the data
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Symbol layers in Azure Maps support custom images as well, but the image needs t
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
In Azure Maps, load the GeoJSON data into a data source and connect the data sou
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add references to the Azure Maps Map Drawing Tools JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/drawing/0/atlas-drawing.min.css" type="text/css" />
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Load a map with the same view in Azure Maps along with a map style control and z
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
For a Symbol layer, add the data to a data source. Attach the data source to the
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Symbol layers in Azure Maps support custom images as well. First, load the image
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
GeoJSON is the base data type in Azure Maps. Import it into a data source using
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map, datasource;
Load the GeoJSON data into a data source and connect the data source to a heat m
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<script type='text/javascript'> var map;
In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script>
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the Map Control.
-## v3 (preview)
+## v3 (latest)
+
+### [3.0.0] (August 18, 2023)
+
+#### Bug fixes (3.0.0)
+
+- Fixed zoom control to take into account the `maxBounds` [CameraOptions].
+
+- Fixed an issue where mouse positions were shifted after a CSS scale transform on the map container.
+
+#### Other changes (3.0.0)
+
+- Phased out the style definition version `2022-08-05` and switched the default `styleDefinitionsVersion` to `2023-01-01`.
+
+- Added the `mvc` parameter to encompass the map control version in both definitions and style requests.
+
+#### Installation (3.0.0)
+
+The version is available on [npm][3.0.0] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0][3.0.0]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.js"></script>
+ ```
### [3.0.0-preview.10] (July 11, 2023)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
}) ```
-## v2 (latest)
+## v2
### [2.3.2] (August 11, 2023)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0
[3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10 [3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9 [3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
To create the HTML:
```HTML <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
``` 3. Next, add a reference to the Azure Maps Services module. This module is a JavaScript library that wraps the Azure Maps REST services, making them easy to use in JavaScript. The Services module is useful for powering search functionality.
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
The following steps show you how to create and display the Map control in a web
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. -->
<script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
The following steps show you how to create and display the Map control in a web
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. -->
<script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The Map Control API is a convenient client library. This API allows you to easil
<meta charset="utf-8" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" />
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
<!-- Add a reference to the Azure Maps Services Module JavaScript file. -->
<script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
description: Find out how to create and manage action groups. Learn about notifi
Last updated 05/02/2023 -+ # Action groups
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
You can collect more data automatically when you include instrumentation librari
### [ASP.NET Core](#tab/aspnetcore)
-To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods.
+To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods.
The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 me
```csharp
var builder = WebApplication.CreateBuilder(args);
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
-builder.Services.Configure<ApplicationInsightsSamplerOptions>(options => { options.SamplingRatio = 0.1F; });
+builder.Services.AddOpenTelemetry().UseAzureMonitor(o =>
+{
+ o.SamplingRatio = 0.1F;
+});
var app = builder.Build();
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
The following metrics have unique behavior characteristics:
- The `oomKilledContainerCount` metric is only sent when there are OOM killed containers.
- The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule.
- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.
-- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. **Prometheus only**
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Previously updated : 07/27/2022 Last updated : 08/16/2023 #Customer intent: As a dev-ops administrator I want to migrate my retention setting from diagnostic setting retention storage to Azure Storage lifecycle management so that it continues to work after the feature has been deprecated.
This guide walks you through migrating from using Azure diagnostic settings stor
> [!IMPORTANT] > **Deprecation Timeline.**
-> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. If you have configured retention settings, you'll still be able to see and change them.
-> - September 30, 2023 – You will no longer be able to use the API or Azure portal to configure retention setting unless you're changing them to *0*. Existing retention rules will still be respected.
+> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. This includes using the portal, CLI, PowerShell, and ARM and Bicep templates. If you have configured retention settings, you'll still be able to see and change them in the portal.
+> - September 30, 2023 – You will no longer be able to use the API (CLI, PowerShell, or templates) or the Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected.
> - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments.
To migrate your diagnostics settings retention rules, follow the steps below:
1. Set your retention time, then select **Next** :::image type="content" source="./media/retention-migration/lifecycle-management-add-rule-base-blobs.png" alt-text="A screenshot showing the Base blobs tab for adding a lifecycle rule.":::
-1. On the **Filters** tab, under **Blob prefix** set path or prefix to the container or logs you want the retention rule to apply to.
-For example, for all Function App logs, you could use the container *insights-logs-functionapplogs* to set the retention for all Function App logs.
-To set the rule for a specific subscription, resource group, and function app name, use *insights-logs-functionapplogs/resourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your function app name\>*.
+1. On the **Filters** tab, under **Blob prefix** set path or prefix to the container or logs you want the retention rule to apply to. The path or prefix can be at any level within the container and will apply to all blobs under that path or prefix.
+For example, for *all* insights activity logs, use the container *insights-activity-logs* to set the retention for all of the logs in that container.
+To set the rule for a specific web app, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your webapp name\>*.
+
+ Use the Storage browser to help you find the path or prefix.
+ The example below shows the prefix for a specific web app: *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001/PROVIDERS/MICROSOFT.WEB/SITES/appfromdocker1*.
+ To set the rule for all resources in the resource group, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001*.
+ :::image type="content" source="./media/retention-migration/blob-prefix.png" alt-text="A screenshot showing the Storage browser and resource path." lightbox="./media/retention-migration/blob-prefix.png":::
1. Select **Add** to save the rule.

## Next steps
azure-monitor Tutorial Logs Ingestion Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md
Last updated 03/20/2023
The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme).

> [!NOTE]
-> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components.
+> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal UI to configure these components.
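For a flavor of the client libraries, here's a minimal sketch using the JavaScript library (`@azure/monitor-ingestion`); the endpoint, DCR immutable ID, stream name, and record shape are placeholders you'd replace with values from your deployed components:

```js
// Upload custom records with the Logs Ingestion API via the JS client library.
// Assumes a credential source usable by DefaultAzureCredential (e.g., az login).
const { DefaultAzureCredential } = require("@azure/identity");
const { LogsIngestionClient } = require("@azure/monitor-ingestion");

const client = new LogsIngestionClient(
  "https://<my-dce>.eastus-1.ingest.monitor.azure.com", // data collection endpoint
  new DefaultAzureCredential()
);

async function main() {
  // The stream name must match a stream declared in the data collection rule.
  await client.upload("dcr-<immutable-id>", "Custom-MyTableRawData", [
    { Time: new Date().toISOString(), Computer: "Computer1", AdditionalContext: "sample" },
  ]);
}

main().catch(console.error);
```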
The steps required to configure the Logs ingestion API are as follows:
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
ms.assetid:
na-+ Last updated 03/08/2023
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
* [SAP S/4HANA in Linux on Azure - Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-s4hana) * [Run SAP BW/4HANA with Linux VMs - Azure Architecture Center](/azure/architecture/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines) * [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md)
+* [SAP on Azure NetApp Files Sizing Best Practices](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-netapp-files-sizing-best-practices/ba-p/3895300)
* [Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimize-hana-deployments-with-azure-netapp-files-application/ba-p/3683417) * [Using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions (MP)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-netapp-files-avg-for-sap-hana-to-deploy-hana-with/ba-p/3742747) * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md)
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
na Previously updated : 02/23/2023 Last updated : 08/15/2023 # Requirements and considerations for Azure NetApp Files backup
Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol
* Policy-based (scheduled) Azure NetApp Files backup is independent from [snapshot policy configuration](azure-netapp-files-manage-snapshots.md).
-* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a cross-region replication *destination* volume.
+* In a [cross-region replication](cross-region-replication-introduction.md) (CRR) or [cross-zone replication](cross-zone-replication-introduction.md) (CZR) setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a CRR or CZR *destination* volume.
* See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups.
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Title: Manage resource groups - Azure portal description: Use Azure portal to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups.- Previously updated : 03/26/2019- Last updated : 08/16/2023 # Manage Azure resource groups by using the Azure portal
The resource group stores metadata about the resources. Therefore, when you spec
## Create resource groups

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups**
+1. Select **Resource groups**.
+1. Select **Create**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted.":::
-3. Select **Add**.
-4. Enter the following values:
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png":::
- - **Subscription**: Select your Azure subscription.
- - **Resource group**: Enter a new resource group name.
+1. Enter the following values:
+
+ - **Subscription**: Select your Azure subscription.
+ - **Resource group**: Enter a new resource group name.
- **Region**: Select an Azure location, such as **Central US**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region.":::
-5. Select **Review + Create**
-6. Select **Create**. It takes a few seconds to create a resource group.
-7. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification**(the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png":::
+1. Select **Review + Create**.
+1. Select **Create**. It takes a few seconds to create a resource group.
+1. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification** (the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png":::
## List resource groups

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. To list the resource groups, select **Resource groups**
-
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups.":::
+1. To list the resource groups, select **Resource groups**.
+1. To customize the information displayed for the resource groups, configure the filters. The following screenshot shows the additional columns you could add to the display:
-3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the additional columns you could add to the display:
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png":::
## Open resource groups

1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Resource groups**.
-3. Select the resource group you want to open.
+1. Select **Resource groups**.
+1. Select the resource group you want to open.
## Delete resource groups

1. Open the resource group you want to delete. See [Open resource groups](#open-resource-groups).
-2. Select **Delete resource group**.
+1. Select **Delete resource group**.
- :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group." lightbox="./media/manage-resource-groups-portal/delete-group.png":::
For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md).
You can move the resources in the group to another resource group. For more info
## Lock resource groups
-Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource.
+Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as an Azure subscription, resource group, or resource.
1. Open the resource group you want to lock. See [Open resource groups](#open-resource-groups).
-2. In the left pane, select **Locks**.
-3. To add a lock to the resource group, select **Add**.
-4. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**.
+1. In the left pane, select **Locks**.
+1. To add a lock to the resource group, select **Add**.
+1. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only** and **Delete**.
- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes.":::
+ :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png":::
For more information, see [Lock resources to prevent unexpected changes](lock-resources.md).
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 02/02/2023 Last updated : 08/15/2023 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
* automationAccounts
+## Microsoft.AzureArcData
+
+* SqlServerInstances
+ ## Microsoft.AzureStack * generateDeploymentLicense
Some resources have a limit on the number instances per region. This limit is di
* botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+## Microsoft.Cdn
+
+* profiles - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+* profiles/networkpolicies - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit
+ ## Microsoft.Compute
+* diskEncryptionSets
* disks * galleries * galleries/images
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.DBforPostgreSQL * flexibleServers
-* serverGroups
* serverGroupsv2 * servers
-* serversv2
## Microsoft.DevTestLab
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.EdgeOrder
+* bootstrapConfigurations
* orderItems * orders
Some resources have a limit on the number instances per region. This limit is di
* clusters * namespaces
+## Microsoft.Fabric
+
+* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Fabric/UnlimitedResourceGroupQuota
+ ## Microsoft.GuestConfiguration * guestConfigurationAssignments
Some resources have a limit on the number instances per region. This limit is di
* machines * machines/extensions
+* machines/runcommands
## Microsoft.Logic
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Network
-* applicationGatewayWebApplicationFirewallPolicies
* applicationSecurityGroups
-* bastionHosts
* customIpPrefixes * ddosProtectionPlans
-* dnsForwardingRulesets
-* dnsForwardingRulesets/forwardingRules
-* dnsForwardingRulesets/virtualNetworkLinks
-* dnsResolvers
-* dnsResolvers/inboundEndpoints
-* dnsResolvers/outboundEndpoints
-* dnszones
-* dnszones/A
-* dnszones/AAAA
-* dnszones/all
-* dnszones/CAA
-* dnszones/CNAME
-* dnszones/MX
-* dnszones/NS
-* dnszones/PTR
-* dnszones/recordsets
-* dnszones/SOA
-* dnszones/SRV
-* dnszones/TXT
-* expressRouteCrossConnections
* loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit * networkIntentPolicies * networkInterfaces * networkSecurityGroups
-* privateDnsZones
-* privateDnsZones/A
-* privateDnsZones/AAAA
-* privateDnsZones/all
-* privateDnsZones/CNAME
-* privateDnsZones/MX
-* privateDnsZones/PTR
-* privateDnsZones/SOA
-* privateDnsZones/SRV
-* privateDnsZones/TXT
-* privateDnsZones/virtualNetworkLinks
* privateEndpointRedirectMaps * privateEndpoints * privateLinkServices * publicIPAddresses * serviceEndpointPolicies
-* trafficmanagerprofiles
-* virtualNetworks/privateDnsZoneLinks
* virtualNetworkTaps
+## Microsoft.NetworkCloud
+
+* volumes
+
+## Microsoft.NetworkFunction
+
+* vpnBranches - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NetworkFunction/AllowNaasVpnAccess
+ ## Microsoft.NotificationHubs * namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit
Some resources have a limit on the number instances per region. This limit is di
* assignments * securityConnectors
+* securityConnectors/devops
## Microsoft.ServiceBus
Some resources have a limit on the number instances per region. This limit is di
* accounts/jobs * accounts/models * accounts/networks
+* accounts/secrets
* accounts/storageContainers ## Microsoft.Sql
Some resources have a limit on the number instances per region. This limit is di
* storageAccounts
-## Microsoft.StoragePool
-
-* diskPools
-* diskPools/iscsiTargets
- ## Microsoft.StreamAnalytics * streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.Web * apiManagementAccounts/apis
+* certificates - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Web/DisableResourcesPerRGLimitForAPIMinWebApp
* sites ## Next steps
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
It should be noted that these types of failures, although rare, fall outside the
Azure VMware Solution stretched clusters are available in the following regions:

- UK South (on AV36)
-- West Europe (on AV36)
+- West Europe (on AV36 and AV36P)
- Germany West Central (on AV36)
- Australia East (on AV36P)
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md
description: Learn how to rotate the vCenter Server credentials for your Azure V
Previously updated : 12/22/2022 Last updated : 8/15/2023 #Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials. # Rotate the cloudadmin credentials for Azure VMware Solution
->[!IMPORTANT]
->Currently, rotating your NSX-T Manager *cloudadmin* credentials isn't supported. To rotate your NSX-T Manager password, submit a [support request](https://rc.portal.azure.com/#create/Microsoft.Support). This process might impact running HCX services.
-In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
+In this article, you'll rotate the cloudadmin credentials (vCenter Server and NSX-T *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time.
>[!CAUTION]
->If you use your cloudadmin credentials to connect services to vCenter Server in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
+>If you use your cloudadmin credentials to connect services to vCenter Server or NSX-T in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password.
## Prerequisites
-Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
+Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* or to NSX-T as *cloudadmin* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning.
One way to determine which services authenticate to vCenter Server with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You'll also experience temporary locks on your vCenter Server CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials.
-Instead of using the cloudadmin user to connect services to vCenter Server, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
+Instead of using the cloudadmin user to connect services to vCenter Server or NSX-T, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
## Reset your vCenter Server credentials

### [Portal](#tab/azure-portal)

1. In your Azure VMware Solution private cloud, select **VMWare credentials**.
-1. Select **Generate new password**.
+1. Select **Generate new password** under vCenter Server credentials.
1. Select the confirmation checkbox and then select **Generate password**.
To begin using Azure CLI:
``` -----
-
--
-
+
-
-## Update HCX Connector
+### Update HCX Connector
1. Go to the on-premises HCX Connector at https://{ip of the HCX connector appliance}:443 and sign in using the new credentials.
To begin using Azure CLI:
4. Provide the new vCenter Server user credentials and select **Edit**, which saves the credentials. Save should show successful.
+## Reset your NSX-T Manager credentials
+
+1. In your Azure VMware Solution private cloud, select **VMWare credentials**.
+1. Select **Generate new password** under NSX-T Manager credentials.
+1. Select the confirmation checkbox and then select **Generate password**.
+ ## Next steps
-Now that you've covered resetting your vCenter Server credentials for Azure VMware Solution, you may want to learn about:
+Now that you've covered resetting your vCenter Server and NSX-T credentials for Azure VMware Solution, you may want to learn about:
- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md) - [Deploying disaster recovery for Azure VMware Solution workloads using VMware HCX](deploy-disaster-recovery-using-vmware-hcx.md) -
azure-web-pubsub Concept Azure Ad Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-azure-ad-authorization.md
Title: Authorize access with Azure Active Directory for Azure Web PubSub
-description: This article provides information on authorizing access to Azure Web PubSub Service resources using Azure Active Directory.
+description: This article provides information on authorizing access to Azure Web PubSub Service resources using Azure Active Directory.
By utilizing role-based access control (RBAC) within Azure AD, permissions can b
Using Azure AD for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Azure AD authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges. <a id="security-principal"></a>
-*[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.*
+_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._
## Overview of Azure AD for Web PubSub
Before assigning an Azure RBAC role to a security principal, it's important to i
You can scope access to Azure Web PubSub resources at the following levels, beginning with the narrowest scope:

-- **An individual resource.**
+- **An individual resource.**
At this scope, a role assignment applies to only the target resource. -- **A resource group.**
+- **A resource group.**
At this scope, a role assignment applies to all of the resources in the resource group.
You can scope access to Azure SignalR resources at the following levels, beginni
At this scope, a role assignment applies to all of the resources in all of the resource groups in the subscription. -- **A management group.**
+- **A management group.**
At this scope, a role assignment applies to all of the resources in all of the resource groups in all of the subscriptions in the management group.
You can scope access to Azure SignalR resources at the following levels, beginni
- `Web PubSub Service Owner`
- Full access to data-plane permissions, including read/write REST APIs and Auth APIs.
+ Full access to data-plane permissions, including read/write REST APIs and Auth APIs.
- This role is the most common used for building an upstream server.
+  This role is most commonly used for building an upstream server.
- `Web PubSub Service Reader`
- Use to grant read-only REST APIs permissions to Web PubSub resources.
+ Use to grant read-only REST APIs permissions to Web PubSub resources.
- It's used when you'd like to write a monitoring tool that calling **ONLY** Web PubSub data-plane **READONLY** REST APIs.
+  It's used when you'd like to write a monitoring tool that calls **ONLY** Web PubSub data-plane **READONLY** REST APIs.
## Next steps

To learn how to create an Azure application and use Azure AD auth, see

- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)

To learn how to configure a managed identity and use Azure AD auth, see

- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)
-To learn more about roles and role assignments, see
+To learn more about roles and role assignments, see
+ - [What is Azure role-based access control](../role-based-access-control/overview.md)
-To learn how to create custom roles, see
+To learn how to create custom roles, see
+ - [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role) To learn how to use only Azure AD authentication, see-- [Disable local authentication](./howto-disable-local-auth.md)+
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Concept Service Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-service-internals.md
Title: Azure Web PubSub service internals
-description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted.
+description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted.
Last updated 09/30/2022
-# Azure Web PubSub service internals
+# Azure Web PubSub service internals
Azure Web PubSub Service provides an easy way to publish/subscribe messages using simple [WebSocket](https://tools.ietf.org/html/rfc6455) connections.
Azure Web PubSub Service provides an easy way to publish/subscribe messages usin
- The service manages the WebSocket connections for you.

## Terms
-* **Service**: Azure Web PubSub Service.
+
+- **Service**: Azure Web PubSub Service.
[!INCLUDE [Terms](includes/terms.md)]
Azure Web PubSub Service provides an easy way to publish/subscribe messages usin
![Diagram showing the Web PubSub service workflow.](./media/concept-service-internals/workflow.png)

Workflow as shown in the above graph:
-1. A *client* connects to the service `/client` endpoint using WebSocket transport. Service forward every WebSocket frame to the configured upstream(server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client-protocol).
+
+1. A _client_ connects to the service `/client` endpoint using WebSocket transport. The service forwards every WebSocket frame to the configured upstream (server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client-protocol).
2. On different client events, the service invokes the server using **CloudEvents protocol**. [**CloudEvents**](https://github.com/cloudevents/spec/tree/v1.0.1) is a standardized and protocol-agnostic definition of the structure and metadata description of events hosted by the Cloud Native Computing Foundation (CNCF). Detailed implementation of CloudEvents protocol relies on the server role, described in [server protocol](#server-protocol). 3. The Web PubSub server can invoke the service using the REST API to send messages to clients or to manage the connected clients. Details are described in [server protocol](#server-protocol)
Workflow as shown in the above graph:
A client connection connects to the `/client` endpoint of the service using [WebSocket protocol](https://tools.ietf.org/html/rfc6455). The WebSocket protocol provides full-duplex communication channels over a single TCP connection and was standardized by the IETF as RFC 6455 in 2011. Most languages have native support to start WebSocket connections.

Our service supports two kinds of clients:

- One is called [the simple WebSocket client](#the-simple-websocket-client)
- The other is called [the PubSub WebSocket client](#the-pubsub-websocket-client)

### The simple WebSocket client

A simple WebSocket client, as the naming indicates, is a simple WebSocket connection. It can also have its custom subprotocol. For example, in JS, a simple WebSocket client can be created using the following code.

```js
-var client1 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1');
+var client1 = new WebSocket("wss://test.webpubsub.azure.com/client/hubs/hub1");
// simple WebSocket client2 with some custom subprotocol
-var client2 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'custom.subprotocol')
-
+var client2 = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "custom.subprotocol"
+);
```

A simple WebSocket client follows a client<->server architecture, as the below sequence diagram shows:

![Diagram showing the sequence for a client connection.](./media/concept-service-internals/simple-client-sequence.png)

1. When the client starts a WebSocket handshake, the service tries to invoke the `connect` event handler for the WebSocket handshake. Developers can use this handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
2. When the client is successfully connected, the service invokes a `connected` event handler. It works as a notification and doesn't block the client from sending messages. Developers can use this handler to do data storage and can respond with messages to the client. The service also pushes a `connected` event to all concerning event listeners, if any.
3. When the client sends messages, the service triggers a `message` event to the event handler to handle the messages sent. This event is a general event containing the messages sent in a WebSocket frame. Your code needs to dispatch the messages inside this event handler. If the event handler returns a non-successful response code, the service drops the client connection. The service also pushes a `message` event to all concerning event listeners, if any. If the service can't find any registered servers to receive the messages, the service also drops the connection.
4. When the client disconnects, the service tries to trigger the `disconnected` event to the event handler once it detects the disconnect. The service also pushes a `disconnected` event to all concerning event listeners, if any.

#### Scenarios

These connections can be used in a typical client-server architecture where the client sends messages to the server and the server handles incoming messages using [Event Handlers](#event-handler). It can also be used when customers apply existing [subprotocols](https://www.iana.org/assignments/websocket/websocket.xml) in their application logic.

### The PubSub WebSocket client

The service also supports a specific subprotocol called `json.webpubsub.azure.v1`, which empowers the clients to do publish/subscribe directly instead of a round trip to the upstream server. We call the WebSocket connection with `json.webpubsub.azure.v1` subprotocol a PubSub WebSocket client. For more information, see the [Web PubSub client specification](https://github.com/Azure/azure-webpubsub/blob/main/protocols/client/client-spec.md) on GitHub.

For example, in JS, a PubSub WebSocket client can be created using the following code.

```js
-var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.webpubsub.azure.v1');
+var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "json.webpubsub.azure.v1"
+);
```

A PubSub WebSocket client can:
-* Join a group, for example:
- ```json
- {
- "type": "joinGroup",
- "group": "<group_name>"
- }
- ```
-* Leave a group, for example:
- ```json
- {
- "type": "leaveGroup",
- "group": "<group_name>"
- }
- ```
-* Publish messages to a group, for example:
- ```json
- {
- "type": "sendToGroup",
- "group": "<group_name>",
- "data": { "hello": "world" }
- }
- ```
-* Send custom events to the upstream server, for example:
-
- ```json
- {
- "type": "event",
- "event": "<event_name>",
- "data": { "hello": "world" }
- }
- ```
+
+- Join a group, for example:
+ ```json
+ {
+ "type": "joinGroup",
+ "group": "<group_name>"
+ }
+ ```
+- Leave a group, for example:
+ ```json
+ {
+ "type": "leaveGroup",
+ "group": "<group_name>"
+ }
+ ```
+- Publish messages to a group, for example:
+ ```json
+ {
+ "type": "sendToGroup",
+ "group": "<group_name>",
+ "data": { "hello": "world" }
+ }
+ ```
+- Send custom events to the upstream server, for example:
+
+ ```json
+ {
+ "type": "event",
+ "event": "<event_name>",
+ "data": { "hello": "world" }
+ }
+ ```
[PubSub WebSocket Subprotocol](./reference-json-webpubsub-subprotocol.md) contains the details of the `json.webpubsub.azure.v1` subprotocol.
-You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the *server* is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the *event* the message belongs.
+You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the _server_ is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the _event_ the message belongs.
#### Scenarios:

Such clients can be used when clients want to talk to each other. Messages are sent from `client2` to the service and the service delivers the message directly to `client1` if the clients are authorized to do so.

Client1:

```js
-var client1 = new WebSocket("wss://xxx.webpubsub.azure.com/client/hubs/hub1", "json.webpubsub.azure.v1");
-client1.onmessage = e => {
- if (e.data) {
- var message = JSON.parse(e.data);
- if (message.type === "message"
- && message.group === "Group1"){
- // Only print messages from Group1
- console.log(message.data);
- }
+var client1 = new WebSocket(
+ "wss://xxx.webpubsub.azure.com/client/hubs/hub1",
+ "json.webpubsub.azure.v1"
+);
+client1.onmessage = (e) => {
+ if (e.data) {
+ var message = JSON.parse(e.data);
+ if (message.type === "message" && message.group === "Group1") {
+ // Only print messages from Group1
+ console.log(message.data);
}
+ }
};
-client1.onopen = e => {
- client1.send(JSON.stringify({
- type: "joinGroup",
- group: "Group1"
- }));
+client1.onopen = (e) => {
+ client1.send(
+ JSON.stringify({
+ type: "joinGroup",
+ group: "Group1",
+ })
+ );
};
```
As the above example shows, `client2` sends data directly to `client1` by publis
### Client events summary

Client events fall into two categories:
-* Synchronous events (blocking)
- Synchronous events block the client workflow.
- * `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
- * `message`: This event is triggered when a client sends a message.
-* Asynchronous events (non-blocking)
- Asynchronous events don't block the client workflow, it acts as some notification to server. When such an event trigger fails, the service logs the error detail.
- * `connected`: This event is triggered when a client connects to the service successfully.
- * `disconnected`: This event is triggered when a client disconnected with the service.
+
+- Synchronous events (blocking)
+ Synchronous events block the client workflow.
+ - `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups.
+ - `message`: This event is triggered when a client sends a message.
+- Asynchronous events (non-blocking)
+ Asynchronous events don't block the client workflow, it acts as some notification to server. When such an event trigger fails, the service logs the error detail.
+ - `connected`: This event is triggered when a client connects to the service successfully.
+ - `disconnected`: This event is triggered when a client disconnected with the service.
### Client message limit

The maximum allowed message size for one WebSocket frame is **1MB**.

### Client authentication
The following graph describes the workflow.
![Diagram showing the client authentication workflow.](./media/concept-service-internals/client-connect-workflow.png)
-As you may have noticed when we describe the PubSub WebSocket clients, that a client can publish to other clients only when it's *authorized* to. The `role`s of the client determines the *initial* permissions the client have:
+As you may have noticed when we describe the PubSub WebSocket clients, a client can publish to other clients only when it's _authorized_ to. The `role`s of the client determine the _initial_ permissions the client has:
-| Role | Permission |
-|||
-| Not specified | The client can send events.
-| `webpubsub.joinLeaveGroup` | The client can join/leave any group.
-| `webpubsub.sendToGroup` | The client can publish messages to any group.
-| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`.
-| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`.
+| Role | Permission |
+| - | |
+| Not specified | The client can send events. |
+| `webpubsub.joinLeaveGroup` | The client can join/leave any group. |
+| `webpubsub.sendToGroup` | The client can publish messages to any group. |
+| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`. |
+| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`. |
The server-side can also grant or revoke permissions of the client dynamically through [server protocol](#connection-manager), as illustrated in a later section.
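As a concrete sketch of these initial roles (assuming the `@azure/web-pubsub` server SDK; the connection string, hub, user, and group names are placeholders), a server can mint a client access URL whose token carries them:

```js
// Issue a client access URL whose token carries initial roles for the client.
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const service = new WebPubSubServiceClient(
  process.env.WEB_PUBSUB_CONNECTION_STRING,
  "hub1"
);

async function main() {
  const { url } = await service.getClientAccessToken({
    userId: "user1",
    roles: ["webpubsub.joinLeaveGroup.Group1", "webpubsub.sendToGroup.Group1"],
  });
  console.log(url); // wss://... hand this URL to the PubSub WebSocket client
}

main().catch(console.error);
```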
The server-side can also grant or revoke permissions of the client dynamically t
Server protocol provides the functionality for the server to handle client events and manage the client connections and the groups. In general, server protocol contains three roles:

1. [Event handler](#event-handler)
1. [Connection manager](#connection-manager)
1. [Event listener](#event-listener)

### Event handler

The event handler handles the incoming client events. Event handlers are registered and configured in the service through the portal or Azure CLI. When a client event is triggered, the service can identify if the event is to be handled or not. Now we use `PUSH` mode to invoke the event handler. The event handler on the server side exposes a publicly accessible endpoint for the service to invoke when the event is triggered. It acts as a **webhook** (a rough sketch follows the authentication notes below). Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md).
When doing the validation, the `{event}` parameter is resolved to `validate`. Fo
For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback).

#### Authentication between service and webhook

- Anonymous mode
- Simple authentication where a `code` is provided through the configured Webhook URL.
- Use Azure Active Directory (Azure AD) authentication. For more information, see [how to use managed identity](howto-use-managed-identity.md) for details.
- - Step1: Enable Identity for the Web PubSub service
- - Step2: Select from existing Azure AD application that stands for your webhook web app
+ - Step1: Enable Identity for the Web PubSub service
+ - Step2: Select from existing Azure AD application that stands for your webhook web app
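A rough sketch of such a webhook (using Express as an assumed framework; the path and port are placeholders) that answers the abuse-protection `OPTIONS` check and acknowledges pushed events:

```js
// Minimal upstream event handler for Web PubSub CloudEvents over HTTP.
const express = require("express");
const app = express();
app.use(express.text({ type: "*/*" }));

// Abuse protection: the service validates the endpoint with an OPTIONS request.
app.options("/eventhandler", (req, res) => {
  res.setHeader("WebHook-Allowed-Origin", "*");
  res.status(200).end();
});

// Client events arrive as POSTs with CloudEvents attributes in ce-* headers.
app.post("/eventhandler", (req, res) => {
  console.log(req.get("ce-type"), "from", req.get("ce-connectionid"), req.body);
  res.status(200).send("ack"); // a non-2xx response would drop the client
});

app.listen(8080);
```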
### Connection manager
-The server is by nature an authorized user. With the help of the *event handler role*, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can:
- - Close a client connection
- - Send messages to a client
- - Send messages to clients that belong to the same user
- - Add a client to a group
- - Add clients authenticated as the same user to a group
- - Remove a client from a group
- - Remove clients authenticated as the same user from a group
- - Publish messages to a group
+The server is by nature an authorized user. With the help of the _event handler role_, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can:
+
+- Close a client connection
+- Send messages to a client
+- Send messages to clients that belong to the same user
+- Add a client to a group
+- Add clients authenticated as the same user to a group
+- Remove a client from a group
+- Remove clients authenticated as the same user from a group
+- Publish messages to a group
It can also grant or revoke publish/join permissions for a PubSub client:
- - Grant publish/join permissions to some specific group or to all groups
- - Revoke publish/join permissions for some specific group or for all groups
- - Check if the client has permission to join or publish to some specific group or to all groups
+
+- Grant publish/join permissions to some specific group or to all groups
+- Revoke publish/join permissions for some specific group or for all groups
+- Check if the client has permission to join or publish to some specific group or to all groups
The service provides REST APIs for the server to do connection management.
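For instance, a minimal sketch of the manager role with the JavaScript server SDK (`@azure/web-pubsub`); the connection string, hub, group, and connection ID are placeholders:

```js
// Manage connections and groups through the service's REST APIs via the SDK.
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const service = new WebPubSubServiceClient(
  process.env.WEB_PUBSUB_CONNECTION_STRING,
  "hub1"
);

async function main() {
  await service.group("Group1").addConnection("<connection-id>");
  await service.group("Group1").sendToAll({ hello: "world" });
  // Grant this connection permission to publish to Group1.
  await service.grantPermission("<connection-id>", "sendToGroup", {
    targetName: "Group1",
  });
}

main().catch(console.error);
```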
You can combine an [event handler](#event-handler) and event listeners for the s
Web PubSub service delivers client events to event listeners using [CloudEvents AMQP extension for Azure Web PubSub](reference-cloud-events-amqp.md). ### Summary
-You may have noticed that the *event handler role* handles communication from the service to the server while *the manager role* handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol.
+
+You may have noticed that the _event handler role_ handles communication from the service to the server while _the manager role_ handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol.
![Diagram showing the Web PubSub service bi-directional workflow.](./media/concept-service-internals/http-service-server.png)
azure-web-pubsub Howto Authorize From Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-application.md
# Authorize request to Web PubSub resources with Azure AD from Azure applications
-Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md).
+Azure Web PubSub Service supports Azure Active Directory (Azure AD) for authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md).
This article shows how to configure your Web PubSub resource and codes to authorize the request to a Web PubSub resource from an Azure application.
The first step is to register an Azure application.
2. Under **Manage** section, select **App registrations**.
3. Click **New registration**.
- ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png)
+ ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png)
4. Enter a display **Name** for your application. 5. Click **Register** to confirm the register.
Once you have your application registered, you can find the **Application (clien
![Screenshot of an application.](./media/howto-authorize-from-application/application-overview.png)

To learn more about registering an application, see

- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md).

## Add credentials
The application requires a client secret to prove its identity when requesting a
1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, click **New client secret**.
-![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png)
+ ![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png)
1. Enter a **description** for the client secret, and choose an **expire time**.
-1. Copy the value of the **client secret** and then paste it to a secure location.
- > [!NOTE]
- > The secret will display only once.
+1. Copy the value of the **client secret** and then paste it to a secure location.
+ > [!NOTE]
+   > The secret is displayed only once.
+ ### Certificate You can also upload a certificate instead of creating a client secret.
To learn more about adding credentials, see
## Add role assignments on Azure portal
-This sample shows how to assign a `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource.
+This sample shows how to assign a `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource.
> [!Note] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. On the [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub.
This sample shows how to assign a `Web PubSub Service Owner` role to a service p
1. Click **Select Members**
-3. Search for and select the application that you would like to assign the role to.
+1. Search for and select the application that you would like to assign the role to.
1. Click **Select** to confirm the selection.
-4. Click **Next**.
+1. Click **Next**.
![Screenshot of assigning role to service principals.](./media/howto-authorize-from-application/assign-role-to-service-principals.png)
-5. Click **Review + assign** to confirm the change.
+1. Click **Review + assign** to confirm the change.
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
-To learn more about how to assign and manage Azure role assignments, see these articles:
+> To learn more about how to assign and manage Azure role assignments, see these articles:
+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
To learn more about how to assign and manage Azure role assignments, see these a
- [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md) ## Use Postman to get the Azure AD token+ 1. Launch Postman 2. For the method, select **GET**.
To learn more about how to assign and manage Azure role assignments, see these a
![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png) 5. Switch to the **Body** tab, and add the following keys and values.
- 1. Select **x-www-form-urlencoded**.
- 2. Add `grant_type` key, and type `client_credentials` for the value.
- 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier.
- 4. Add `client_secret` key, and paste the value of client secret you noted down earlier.
- 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value.
+ 1. Select **x-www-form-urlencoded**.
+ 2. Add `grant_type` key, and type `client_credentials` for the value.
+ 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier.
+   4. Add `client_secret` key, and paste the value of the client secret you noted down earlier.
+ 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value.
![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png)
-6. Select **Send** to send the request to get the token. You see the token in the `access_token` field.
+6. Select **Send** to send the request to get the token. You see the token in the `access_token` field.
![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png)
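The same request can be scripted outside Postman. Below is a rough Node.js (18+) equivalent of the call above; the tenant ID, client ID, and secret are placeholders, and the v1 token endpoint URL is an assumption inferred from the `resource` parameter used above.

```javascript
// Node 18+ (global fetch). All angle-bracket values are placeholders.
async function getAadToken() {
  const params = new URLSearchParams({
    grant_type: "client_credentials",
    client_id: "<application-client-id>",
    client_secret: "<client-secret>",
    resource: "https://webpubsub.azure.com",
  });
  const response = await fetch(
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
    { method: "POST", body: params } // body is sent as x-www-form-urlencoded
  );
  const { access_token } = await response.json();
  return access_token;
}
```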
See the following related articles:
- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md) - [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)-- [Disable local authentication](./howto-disable-local-auth.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Authorize From Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-managed-identity.md
# Authorize request to Web PubSub resources with Azure AD from managed identities
-Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+
+Azure Web PubSub Service supports Azure Active Directory (Azure AD) for authorizing requests from [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
This article shows how to configure your Web PubSub resource and code to authorize requests to a Web PubSub resource from a managed identity.
This is an example for configuring `System-assigned managed identity` on a `Virt
1. Click the **Save** button to confirm the change. ### How to create user-assigned managed identities+ - [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) ### How to configure managed identities on other platforms
This is an example for configuring `System-assigned managed identity` on a `Virt
- [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md).
-## Add role assignments on Azure portal
+## Add role assignments on Azure portal
-This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource.
+This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource.
> [!Note] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. Open [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub.
This sample shows how to assign a `Web PubSub Service Owner` role to a system-as
1. Click **Select** to confirm the selection.
-2. Click **Next**.
+1. Click **Next**.
![Screenshot of assigning role to managed identities.](./media/howto-authorize-from-managed-identity/assign-role-to-managed-identities.png)
-3. Click **Review + assign** to confirm the change.
+1. Click **Review + assign** to confirm the change.
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.
-To learn more about how to assign and manage Azure role assignments, see these articles:
+> To learn more about how to assign and manage Azure role assignments, see these articles:
+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
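The portal steps above can also be scripted. A sketch with the Azure CLI, assuming placeholder values for the identity's object ID and the Web PubSub resource scope:

```azurecli
az role assignment create \
  --role "Web PubSub Service Owner" \
  --assignee "<identity-object-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.SignalRService/WebPubSub/<resource-name>"
```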
See the following related articles:
- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md) - [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)-- [Disable local authentication](./howto-disable-local-auth.md)
+- [Disable local authentication](./howto-disable-local-auth.md)
azure-web-pubsub Howto Create Serviceclient With Java And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-java-and-azure-identity.md
This how-to guide shows you how to create a `WebPubSubServiceClient` with Java a
1. Create a `TokenCredential` with Azure Identity SDK.
- ```java
- package com.webpubsub.tutorial;
+ ```java
+ package com.webpubsub.tutorial;
- import com.azure.core.credential.TokenCredential;
- import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.core.credential.TokenCredential;
+ import com.azure.identity.DefaultAzureCredentialBuilder;
- public class App {
+ public class App {
- public static void main(String[] args) {
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- }
- }
- ```
+ public static void main(String[] args) {
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ }
+ }
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+   `credential` can be any class that inherits from the `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```Java
- package com.webpubsub.tutorial;
+ ```Java
+ package com.webpubsub.tutorial;
- import com.azure.core.credential.TokenCredential;
- import com.azure.identity.DefaultAzureCredentialBuilder;
- import com.azure.messaging.webpubsub.WebPubSubServiceClient;
- import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder;
+ import com.azure.core.credential.TokenCredential;
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ import com.azure.messaging.webpubsub.WebPubSubServiceClient;
+ import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder;
- public class App {
- public static void main(String[] args) {
+ public class App {
+ public static void main(String[] args) {
- TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+ TokenCredential credential = new DefaultAzureCredentialBuilder().build();
- // create the service client
- WebPubSubServiceClient client = new WebPubSubServiceClientBuilder()
- .endpoint("<endpoint>")
- .credential(credential)
- .hub("<hub>")
- .buildClient();
- }
- }
- ```
+ // create the service client
+ WebPubSubServiceClient client = new WebPubSubServiceClientBuilder()
+ .endpoint("<endpoint>")
+ .credential(credential)
+ .hub("<hub>")
+ .buildClient();
+ }
+ }
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme)
+   To learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme)
## Complete sample
azure-web-pubsub Howto Create Serviceclient With Javascript And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-javascript-and-azure-identity.md
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
1. Create a `TokenCredential` with Azure Identity SDK.
- ```javascript
- const { DefaultAzureCredential } = require('@azure/identity')
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
- let credential = new DefaultAzureCredential();
- ```
+ let credential = new DefaultAzureCredential();
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+   `credential` can be any class that inherits from the `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```javascript
- const { DefaultAzureCredential } = require('@azure/identity')
+ ```javascript
+ const { DefaultAzureCredential } = require("@azure/identity");
- let credential = new DefaultAzureCredential();
+ let credential = new DefaultAzureCredential();
- let serviceClient = new WebPubSubServiceClient("<endpoint>", credential, "<hub>");
- ```
+ let serviceClient = new WebPubSubServiceClient(
+ "<endpoint>",
+ credential,
+ "<hub>"
+ );
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme)
+   To learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme)
## Complete sample
azure-web-pubsub Howto Create Serviceclient With Net And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-net-and-azure-identity.md
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
- Install [Azure.Messaging.WebPubSub](https://www.nuget.org/packages/Azure.Messaging.WebPubSub) from nuget.org ```bash
- Install-Package Azure.Messaging.WebPubSub
+ Install-Package Azure.Messaging.WebPubSub
``` ## Sample code 1. Create a `TokenCredential` with Azure Identity SDK.
- ```C#
- using Azure.Identity;
-
- namespace chatapp
- {
- public class Program
- {
- public static void Main(string[] args)
- {
- var credential = new DefaultAzureCredential();
- }
- }
- }
- ```
-
- `credential` can be any class that inherits from `TokenCredential` class.
-
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
-
- To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme)
-
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
-
- ```C#
- using Azure.Identity;
- using Azure.Messaging.WebPubSub;
-
- public class Program
- {
- public static void Main(string[] args)
- {
- var credential = new DefaultAzureCredential();
- var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
- }
- }
- ```
-
- Or inject it into `IServiceCollections` with our `BuilderExtensions`.
-
- ```C#
- using System;
-
- using Azure.Identity;
-
- using Microsoft.Extensions.Azure;
- using Microsoft.Extensions.Configuration;
- using Microsoft.Extensions.DependencyInjection;
-
- namespace chatapp
- {
- public class Startup
- {
- public Startup(IConfiguration configuration)
- {
- Configuration = configuration;
- }
-
- public IConfiguration Configuration { get; }
-
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddAzureClients(builder =>
- {
- var credential = new DefaultAzureCredential();
- builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
- });
- }
- }
- }
- ```
-
- Learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme)
+ ```C#
+ using Azure.Identity;
+
+ namespace chatapp
+ {
+ public class Program
+ {
+ public static void Main(string[] args)
+ {
+ var credential = new DefaultAzureCredential();
+ }
+ }
+ }
+ ```
+
+   `credential` can be any class that inherits from the `TokenCredential` class.
+
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
+
+ To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme)
+
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+
+ ```C#
+ using Azure.Identity;
+ using Azure.Messaging.WebPubSub;
+
+ public class Program
+ {
+ public static void Main(string[] args)
+ {
+ var credential = new DefaultAzureCredential();
+ var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
+ }
+ }
+ ```
+
+ Or inject it into `IServiceCollections` with our `BuilderExtensions`.
+
+ ```C#
+ using System;
+
+ using Azure.Identity;
+
+ using Microsoft.Extensions.Azure;
+ using Microsoft.Extensions.Configuration;
+ using Microsoft.Extensions.DependencyInjection;
+
+ namespace chatapp
+ {
+ public class Startup
+ {
+ public Startup(IConfiguration configuration)
+ {
+ Configuration = configuration;
+ }
+
+ public IConfiguration Configuration { get; }
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddAzureClients(builder =>
+ {
+ var credential = new DefaultAzureCredential();
+ builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential);
+ });
+ }
+ }
+ }
+ ```
+
+   To learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme)
## Complete sample
azure-web-pubsub Howto Create Serviceclient With Python And Azure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-python-and-azure-identity.md
This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure
1. Create a `TokenCredential` with Azure Identity SDK.
- ```python
- from azure.identity import DefaultAzureCredential
+ ```python
+ from azure.identity import DefaultAzureCredential
- credential = DefaultAzureCredential()
- ```
+ credential = DefaultAzureCredential()
+ ```
- `credential` can be any class that inherits from `TokenCredential` class.
+   `credential` can be any class that inherits from the `TokenCredential` class.
- - EnvironmentCredential
- - ClientSecretCredential
- - ClientCertificateCredential
- - ManagedIdentityCredential
- - VisualStudioCredential
- - VisualStudioCodeCredential
- - AzureCliCredential
+ - EnvironmentCredential
+ - ClientSecretCredential
+ - ClientCertificateCredential
+ - ManagedIdentityCredential
+ - VisualStudioCredential
+ - VisualStudioCodeCredential
+ - AzureCliCredential
- To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme)
+ To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme)
-2. Then create a `client` with `endpoint`, `hub`, and `credential`.
+2. Then create a `client` with `endpoint`, `hub`, and `credential`.
- ```python
- from azure.identity import DefaultAzureCredential
+ ```python
+ from azure.identity import DefaultAzureCredential
- credential = DefaultAzureCredential()
+ credential = DefaultAzureCredential()
- client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential)
- ```
+ client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential)
+ ```
- Learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme)
+   To learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme)
## Complete sample
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md
Title: Create an Azure Web PubSub resource
-description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template
+description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template
Last updated 03/13/2023
zone_pivot_groups: azure-web-pubsub-create-resource-methods + # Create a Web PubSub resource ## Prerequisites+ > [!div class="checklist"]
-> * An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if don't have one already.
+>
+> - An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if you don't have one already.
> [!TIP] > Web PubSub includes a generous **free tier** that can be used for testing and production purposes.
-
-
+ ::: zone pivot="method-azure-portal"+ ## Create a resource from Azure portal
-1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press enter.
+1. Select the **New** button in the upper left-hand corner of the Azure portal. On the **New** screen, type **Web PubSub** in the search box and press Enter.
- :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
+ :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
2. Select **Web PubSub** from the search results, then select **Create**. 3. Enter the following settings.
- | Setting | Suggested value | Description |
- | | - | -- |
- | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. |
- | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. |
- | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. |
- | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. |
- | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) |
- | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
+ | Setting | Suggested value | Description |
+ | -- | -- | |
+ | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. |
+ | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. |
+ | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. |
+ | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. |
+ | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) |
+ | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
- :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal.":::
+ :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal.":::
4. Select **Create** to provision your Web PubSub resource.-
+ ::: zone-end
::: zone pivot="method-azure-cli"+ ## Create a resource using Azure CLI
-The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
+The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
> [!IMPORTANT] > This quickstart requires Azure CLI version 2.22.0 or higher.
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
[!INCLUDE [Create a Web PubSub instance](includes/cli-awps-creation.md)] ::: zone-end - ::: zone pivot="method-bicep"+ ## Create a resource using Bicep template [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
The template used in this quickstart is from [Azure Quickstart Templates](/sampl
1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
- # [CLI](#tab/CLI)
+ # [CLI](#tab/CLI)
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep
- ```
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
- # [PowerShell](#tab/PowerShell)
+ # [PowerShell](#tab/PowerShell)
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
- ```
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
-
+ ***
- When the deployment finishes, you should see a message indicating the deployment succeeded.
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
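The quickstart template itself isn't reproduced in this digest. As a rough stand-in, a minimal `main.bicep` might look like the following sketch; the resource type and API version are taken from the ARM template shown later in this digest, while the parameter names and the `Free_F1` SKU are assumptions.

```bicep
param name string = 'my-webpubsub'
param location string = resourceGroup().location

// Minimal Web PubSub resource on the free tier.
resource webPubSub 'Microsoft.SignalRService/webPubSub@2022-08-01-preview' = {
  name: name
  location: location
  sku: {
    name: 'Free_F1'
    tier: 'Free'
    capacity: 1
  }
}
```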
## Review deployed resources
Get-AzResource -ResourceGroupName exampleRG
``` + ## Clean up resources When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
az group delete --name exampleRG
```azurepowershell-interactive Remove-AzResourceGroup -Name exampleRG ```+ ::: zone-end ## Next step+ Now that you have created a resource, you are ready to put it to use. Next, you will learn how to subscribe and publish messages among your clients.
-> [!div class="nextstepaction"]
+
+> [!div class="nextstepaction"]
> [PubSub among clients](quickstarts-pubsub-among-clients.md)
azure-web-pubsub Howto Develop Event Listener https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-event-listener.md
Web PubSub service uses Azure Active Directory (Azure AD) authentication with ma
To configure an Event Hubs listener, you need to:
-1. [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service)
-2. [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role)
-3. [Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings)
+- [Send client events to Event Hubs](#send-client-events-to-event-hubs)
+ - [Overview](#overview)
+ - [Configure an event listener](#configure-an-event-listener)
+ - [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service)
+ - [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role)
+ - [Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings)
+ - [Test your configuration with live demo](#test-your-configuration-with-live-demo)
+ - [Next steps](#next-steps)
## Configure an event listener
Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity
### Add an event listener rule to your service settings
-1. Find your service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, select **...** on right side will navigate to the same editing page.
+1. Find your service in the **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, selecting **...** on the right side opens the same editing page.
:::image type="content" source="media/howto-develop-event-listener/web-pubsub-settings.png" alt-text="Screenshot of Web PubSub settings"::: 1. Then in the below editing page, you'd need to configure hub name, and select **Add** to add an event listener.
Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity
1. On the **Configure Event Listener** page, first configure an event hub endpoint. You can choose **Select Event Hub from your subscription**, or directly enter the fully qualified namespace and the event hub name. Then select the `user` and `system` events you'd like to listen to. Finally, select **Confirm**. :::image type="content" source="media/howto-develop-event-listener/configure-event-hub-listener.png" alt-text="Screenshot of configuring Event Hubs Listener"::: - ## Test your configuration with live demo 1. Open this [Event Hubs Consumer Client](https://awpseventlistenerdemo.blob.core.windows.net/eventhub-consumer/index.html) web app, and input the Event Hubs connection string to connect to an event hub as a consumer. If you get the Event Hubs connection string from an Event Hubs namespace resource instead of an event hub instance, you need to specify the event hub name. This consumer client connects in a mode that only reads new events; events published earlier aren't visible here. You can change the consumer client connection mode to read all the available events in the production environment. 1. Use this [WebSocket Client](https://awpseventlistenerdemo.blob.core.windows.net/webpubsub-client/websocket-client.html) web app to generate client events. If you've configured the system event `connected` to be sent to that event hub, you should see a printed `connected` event in the Event Hubs consumer client after connecting to the Web PubSub service successfully. You can also generate a user event with the app.
- :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app":::
- :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="The area of the WebSocket client app to generate a user event":::
+ :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app.":::
+ :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="Screenshot showing the area of the WebSocket client app to generate a user event.":::
## Next steps In this article, you learned how event listeners work and how to configure an event listener with an event hub endpoint. To learn the data format sent to Event Hubs, read the following specification.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Specification: CloudEvents AMQP extension for Azure Web PubSub](./reference-cloud-events-amqp.md)
-<!--TODO: Add demo-->
+
+<!--TODO: Add demo-->
azure-web-pubsub Howto Develop Eventhandler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-eventhandler.md
description: Guidance about event handler concepts and integration introduction
-+ Last updated 01/27/2023 # Event handler in Azure Web PubSub service
-The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service now supports the event handler as the server-side, which exposes the publicly accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**.
+The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service supports the event handler as the server side, which exposes a publicly accessible endpoint for the service to invoke when an event is triggered. In other words, it acts as a **webhook**.
The Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md).
-For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response.
+For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response.
The data sent from the service to the server is always in CloudEvents `binary` format.
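As an illustration, a minimal upstream endpoint might look like the following Node.js/Express sketch (the framework, path, and port are assumptions; any web server works). The `OPTIONS` handshake and `ce-*` headers follow the CloudEvents HTTP protocol binding's abuse protection and binary mode.

```javascript
const express = require("express");
const app = express();
app.use(express.text({ type: "*/*" })); // binary-mode payload arrives in the body

// CloudEvents abuse protection: echo an allowed origin back to the service.
app.options("/eventhandler", (req, res) => {
  res.set("WebHook-Allowed-Origin", req.get("WebHook-Request-Origin") ?? "*");
  res.sendStatus(200);
});

// Event metadata arrives in ce-* headers; a 2xx response acknowledges the event.
app.post("/eventhandler", (req, res) => {
  console.log(`received ${req.get("ce-type")} from ${req.get("ce-userId")}`);
  res.sendStatus(200);
});

app.listen(8080);
```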
You can use any of these methods to authenticate between the service and webhook
- Anonymous mode - Simple Auth with `?code=<code>` is provided through the configured Webhook URL as query parameter.-- Azure Active Directory(Azure AD) authentication. For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios).
+- Azure Active Directory (Azure AD) authentication. For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios).
## Configure event handler
You can add an event handler to a new hub or edit an existing hub.
To configure an event handler in a new hub:
-1. Go to your Azure Web PubSub service page in the **Azure portal**.
-1. Select **Settings** from the menu.
+1. Go to your Azure Web PubSub service page in the **Azure portal**.
+1. Select **Settings** from the menu.
1. Select **Add** to create a hub and configure your server-side webhook URL. Note: To add an event handler to an existing hub, select the hub and select **Edit**. :::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler."::: 1. Enter your hub name. 1. Select **Add** under **Configure Event Handlers**.
-1. In the event handler page, configure the following fields:
- 1. Enter the server webhook URL in the **URL Template** field.
- 1. Select the **System events** that you want to subscribe to.
- 1. Select the **User events** that you want to subscribe to.
- 1. Select **Authentication** method to authenticate upstream requests.
- 1. Select **Confirm**.
+1. In the event handler page, configure the following fields: 1. Enter the server webhook URL in the **URL Template** field. 1. Select the **System events** that you want to subscribe to. 1. Select the **User events** that you want to subscribe to. 1. Select **Authentication** method to authenticate upstream requests. 1. Select **Confirm**.
+ :::image type="content" source="media/howto-develop-eventhandler/configure-event-handler.png" alt-text="Screenshot of Azure Web PubSub Configure Event Handler.":::
1. Select **Save** at the top of the **Configure Hub Settings** page.
To configure an event handler in a new hub:
Use the Azure CLI [**az webpubsub hub**](/cli/azure/webpubsub/hub) group commands to configure the event handler settings.
-Commands | Description
|--
-`create` | Create hub settings for WebPubSub Service.
-`delete` | Delete hub settings for WebPubSub Service.
-`list` | List all hub settings for WebPubSub Service.
-`show` | Show hub settings for WebPubSub Service.
-`update` | Update hub settings for WebPubSub Service.
+| Commands | Description |
+| -- | -- |
+| `create` | Create hub settings for WebPubSub Service. |
+| `delete` | Delete hub settings for WebPubSub Service. |
+| `list` | List all hub settings for WebPubSub Service. |
+| `show` | Show hub settings for WebPubSub Service. |
+| `update` | Update hub settings for WebPubSub Service. |
Here's an example of creating two webhook URLs for hub `MyHub` of `MyWebPubSub` resource:
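The example itself is cut off in this digest; a sketch of what such a command could look like, assuming the `--event-handler` key=value syntax of the `az webpubsub` CLI extension:

```azurecli
az webpubsub hub create \
  --name "MyWebPubSub" \
  --resource-group "MyResourceGroup" \
  --hub-name "MyHub" \
  --event-handler url-template="https://host1.example.com/api/{hub}/{event}" user-event-pattern="*" \
  --event-handler url-template="https://host2.example.com/api/{hub}/{event}" system-event="connected" system-event="disconnected"
```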
azure-web-pubsub Howto Develop Reliable Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-reliable-clients.md
description: How to create reliable Websocket clients
-+ Last updated 01/12/2023
The Web PubSub service supports two reliable subprotocols `json.reliable.webpubs
The simplest way to create a reliable client is to use the Client SDK. The Client SDK implements the [Web PubSub client specification](./reference-client-specification.md) and uses `json.reliable.webpubsub.azure.v1` by default. Refer to [PubSub with client SDK](./quickstart-use-client-sdk.md) for a quick start. - ## The Hard Way - Implement by hand The following tutorial walks you through the important parts of implementing the [Web PubSub client specification](./reference-client-specification.md). This guide isn't for people looking for a quick start; it's for those who want to understand the principles of achieving reliability. For a quick start, use the Client SDK.
To use reliable subprotocols, you must set the subprotocol when constructing Web
- Use Json reliable subprotocol:
- ```js
- var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.reliable.webpubsub.azure.v1');
- ```
+ ```js
+ var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "json.reliable.webpubsub.azure.v1"
+ );
+ ```
- Use Protobuf reliable subprotocol:
- ```js
- var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1');
- ```
+ ```js
+ var pubsub = new WebSocket(
+ "wss://test.webpubsub.azure.com/client/hubs/hub1",
+ "protobuf.reliable.webpubsub.azure.v1"
+ );
+ ```
### Connection recovery Connection recovery is the basis of achieving reliability and must be implemented when using the `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1` protocols.
-Websocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery
+WebSocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery.
-When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service.
+When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service.
```json {
- "type":"system",
- "event":"connected",
- "connectionId": "<connection_id>",
- "reconnectionToken": "<reconnection_token>"
+ "type": "system",
+ "event": "connected",
+ "connectionId": "<connection_id>",
+ "reconnectionToken": "<reconnection_token>"
} ```
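For example, a hand-rolled client could capture these values and use them to attempt recovery of the same session. A sketch building on the WebSocket snippets above; the recovery query parameter names (`awps_connection_id`, `awps_reconnection_token`) are quoted from memory of the client specification and should be verified against it.

```javascript
let connectionId, reconnectionToken;

pubsub.onmessage = (e) => {
  const message = JSON.parse(e.data);
  if (message.type === "system" && message.event === "connected") {
    // Keep these so the client can try to recover the same session later.
    ({ connectionId, reconnectionToken } = message);
  }
};

// On an unexpected drop, reconnect to the recovery endpoint.
pubsub.onclose = () => {
  if (connectionId) {
    pubsub = new WebSocket(
      "wss://test.webpubsub.azure.com/client/hubs/hub1" +
        `?awps_connection_id=${connectionId}&awps_reconnection_token=${reconnectionToken}`,
      "json.reliable.webpubsub.azure.v1"
    );
  }
};
```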
Connection recovery may fail if the network issue hasn't been recovered yet. The
### Publisher
-Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service that publishing the message was successful or not.
+Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service that publishing the message was successful or not.
-The `ackId` is the identifier of the message, each new message should use a unique ID. The original `ackId` should be used when resending a message.
+The `ackId` is the identifier of the message; each new message should use a unique ID. The original `ackId` should be used when resending a message.
A sample group send message: ```json {
- "type": "sendToGroup",
- "group": "group1",
- "dataType" : "text",
- "data": "text data",
- "ackId": 1
+ "type": "sendToGroup",
+ "group": "group1",
+ "dataType": "text",
+ "data": "text data",
+ "ackId": 1
} ```
A sample ack response:
```json {
- "type": "ack",
- "ackId": 1,
- "success": true
+ "type": "ack",
+ "ackId": 1,
+ "success": true
} ``` When the Web PubSub service returns an ack response with `success: true`, the message has been processed by the service, and the client can expect the message will be delivered to all subscribers.
-When the service experiences a transient internal error and the message can't be sent to subscriber, the publisher will receive an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used.
+When the service experiences a transient internal error and the message can't be sent to the subscriber, the publisher receives an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used.
```json {
- "type": "ack",
- "ackId": 1,
- "success": false,
- "error": {
- "name": "InternalServerError",
- "message": "Internal server error"
- }
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "InternalServerError",
+ "message": "Internal server error"
+ }
} ``` ![Message Failure](./media/howto-develop-reliable-clients/message-failed.png)
-If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message.
+If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. If the message was previously processed by the service, the service sends an ack containing a `Duplicate` error, and the publisher should stop resending this message.
```json {
- "type": "ack",
- "ackId": 1,
- "success": false,
- "error": {
- "name": "Duplicate",
- "message": "Message with ack-id: 1 has been processed"
- }
+ "type": "ack",
+ "ackId": 1,
+ "success": false,
+ "error": {
+ "name": "Duplicate",
+ "message": "Message with ack-id: 1 has been processed"
+ }
} ```
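Putting the ack rules together, a publisher's retry loop might look like the following sketch; the names `pending`, `sendToGroup`, and `handleAck` are illustrative, not part of any SDK.

```javascript
const pending = new Map(); // ackId -> message, kept until acknowledged

function sendToGroup(ws, group, data, ackId) {
  const message = { type: "sendToGroup", group, dataType: "text", data, ackId };
  pending.set(ackId, message);
  ws.send(JSON.stringify(message));
}

function handleAck(ws, ack) {
  if (ack.success || ack.error?.name === "Duplicate") {
    pending.delete(ack.ackId); // delivered, or already processed by the service
  } else {
    ws.send(JSON.stringify(pending.get(ack.ackId))); // resend with the SAME ackId
  }
}
```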
A sample sequence ack:
```json {
- "type": "sequenceAck",
- "sequenceId": 1
+ "type": "sequenceAck",
+ "sequenceId": 1
} ```
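A subscriber can track the largest `sequenceId` it has received and acknowledge it periodically. A sketch building on the earlier WebSocket snippets; the one-second interval is an arbitrary choice.

```javascript
let latestSequenceId = 0;

pubsub.addEventListener("message", (e) => {
  const message = JSON.parse(e.data);
  if (message.sequenceId) {
    // Track the largest sequenceId seen; smaller ones indicate duplicates.
    latestSequenceId = Math.max(latestSequenceId, message.sequenceId);
  }
});

// Periodically tell the service what has been received so it can stop resending.
setInterval(() => {
  if (latestSequenceId > 0) {
    pubsub.send(JSON.stringify({ type: "sequenceAck", sequenceId: latestSequenceId }));
  }
}, 1000);
```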
azure-web-pubsub Howto Disable Local Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-disable-local-auth.md
There are two ways to authenticate to Azure Web PubSub Service resources: Azure
> [!IMPORTANT] > Disabling local authentication has the following effects.
-> - The current set of access keys will be permanently deleted.
-> - Tokens signed with current set of access keys will become unavailable.
-> - Signature will **NOT** be attached in the upstream request header. Please visit *[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)* to learn how to validate requests via Azure AD token.
+>
+> - The current set of access keys will be permanently deleted.
+> - Tokens signed with current set of access keys will become unavailable.
+> - A signature will **NOT** be attached in the upstream request header. See _[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)_ to learn how to validate requests via an Azure AD token.
## Use Azure portal
You can disable local authentication by setting `disableLocalAuth` property to t
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "resource_name": {
- "defaultValue": "test-for-disable-aad",
- "type": "String"
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.SignalRService/WebPubSub",
- "apiVersion": "2022-08-01-preview",
- "name": "[parameters('resource_name')]",
- "location": "eastus",
- "sku": {
- "name": "Premium_P1",
- "tier": "Premium",
- "size": "P1",
- "capacity": 1
- },
- "properties": {
- "tls": {
- "clientCertEnabled": false
- },
- "networkACLs": {
- "defaultAction": "Deny",
- "publicNetwork": {
- "allow": [
- "ServerConnection",
- "ClientConnection",
- "RESTAPI",
- "Trace"
- ]
- },
- "privateEndpoints": []
- },
- "publicNetworkAccess": "Enabled",
- "disableLocalAuth": true,
- "disableAadAuth": false
- }
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resource_name": {
+ "defaultValue": "test-for-disable-aad",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.SignalRService/WebPubSub",
+ "apiVersion": "2022-08-01-preview",
+ "name": "[parameters('resource_name')]",
+ "location": "eastus",
+ "sku": {
+ "name": "Premium_P1",
+ "tier": "Premium",
+ "size": "P1",
+ "capacity": 1
+ },
+ "properties": {
+ "tls": {
+ "clientCertEnabled": false
+ },
+ "networkACLs": {
+ "defaultAction": "Deny",
+ "publicNetwork": {
+ "allow": [
+ "ServerConnection",
+ "ClientConnection",
+ "RESTAPI",
+ "Trace"
+ ]
+ },
+ "privateEndpoints": []
+ },
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": true,
+ "disableAadAuth": false
+ }
+ }
+ ]
} ```
azure-web-pubsub Howto Generate Client Access Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-generate-client-access-url.md
# How to generate client access URL for the clients
-A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL.
+A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL.
- For quick start, copy one from the Azure portal - For development, generate the value using [Web PubSub server SDK](./reference-server-sdk-js.md) - If you're using Azure AD, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) ## Copy from the Azure portal+ In the Keys tab in Azure portal, there's a Client URL Generator tool to quickly generate a Client Access URL for you, as shown in the following diagram. Values input here aren't stored. :::image type="content" source="./media/howto-websocket-connect/generate-client-url.png" alt-text="Screenshot of the Web PubSub Client URL Generator."::: ## Generate from service SDK+ The same Client Access URL can be generated by using the Web PubSub server SDK. # [JavaScript](#tab/javascript)
The same Client Access URL can be generated by using the Web PubSub server SDK.
1. Follow [Getting started with server SDK](./reference-server-sdk-js.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:
- * Configure user ID
- ```js
- let token = await serviceClient.getClientAccessToken({ userId: "user1" });
- ```
- * Configure the lifetime of the token
- ```js
- let token = await serviceClient.getClientAccessToken({ expirationTimeInMinutes: 5 });
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.joinLeaveGroup.group1"] });
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.sendToGroup.group1"] });
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```js
- let token = await serviceClient.getClientAccessToken({ groups: ["group1"] });
- ```
+ - Configure user ID
+ ```js
+ let token = await serviceClient.getClientAccessToken({ userId: "user1" });
+ ```
+ - Configure the lifetime of the token
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ expirationTimeInMinutes: 5,
+ });
+ ```
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ roles: ["webpubsub.joinLeaveGroup.group1"],
+ });
+ ```
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ roles: ["webpubsub.sendToGroup.group1"],
+ });
+ ```
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```js
+ let token = await serviceClient.getClientAccessToken({
+ groups: ["group1"],
+ });
+ ```
# [C#](#tab/csharp) 1. Follow [Getting started with server SDK](./reference-server-sdk-csharp.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.GetClientAccessUri`:
- * Configure user ID
- ```csharp
- var url = service.GetClientAccessUri(userId: "user1");
- ```
- * Configure the lifetime of the token
- ```csharp
- var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5));
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" });
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" });
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```csharp
- var url = service.GetClientAccessUri(groups: new string[] { "group1" });
- ```
+ - Configure user ID
+ ```csharp
+ var url = service.GetClientAccessUri(userId: "user1");
+ ```
+ - Configure the lifetime of the token
+ ```csharp
+ var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5));
+ ```
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" });
+ ```
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" });
+ ```
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```csharp
+ var url = service.GetClientAccessUri(groups: new string[] { "group1" });
+ ```
# [Python](#tab/python) 1. Follow [Getting started with server SDK](./reference-server-sdk-python.md#install-the-package) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.get_client_access_token`:
- * Configure user ID
- ```python
- token = service.get_client_access_token(user_id="user1")
- ```
- * Configure the lifetime of the token
- ```python
- token = service.get_client_access_token(minutes_to_expire=5)
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"])
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"])
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```python
- token = service.get_client_access_token(groups=["group1"])
- ```
+ - Configure user ID
+ ```python
+ token = service.get_client_access_token(user_id="user1")
+ ```
+ - Configure the lifetime of the token
+ ```python
+ token = service.get_client_access_token(minutes_to_expire=5)
+ ```
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"])
+ ```
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"])
+ ```
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```python
+ token = service.get_client_access_token(groups=["group1"])
+ ```
# [Java](#tab/java) 1. Follow [Getting started with server SDK](./reference-server-sdk-java.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:
- * Configure user ID
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setUserId(id);
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure the lifetime of the token
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setExpiresAfter(Duration.ofDays(1));
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a role that can join group `group1` directly when it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.addRole("webpubsub.joinLeaveGroup.group1");
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.addRole("webpubsub.sendToGroup.group1");
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
- * Configure a group `group1` that the client joins once it connects using this Client Access URL
- ```java
- GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
- option.setGroups(Arrays.asList("group1")),
- WebPubSubClientAccessToken token = service.getClientAccessToken(option);
- ```
+ - Configure user ID
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.setUserId(id);
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ - Configure the lifetime of the token
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.setExpiresAfter(Duration.ofDays(1));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ - Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.joinLeaveGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.sendToGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ - Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+       option.setGroups(Arrays.asList("group1"));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ In real-world code, you usually host the logic that generates the Client Access URL on a server. When a client request comes in, the server can apply its normal authentication and authorization workflow to validate the request; only valid requests get a Client Access URL back, as the sketch below shows.
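For illustration, here's a minimal sketch of such a negotiate endpoint in Python, assuming Flask and the Python server SDK; the `is_authenticated` check, hub name, header, and connection string are placeholders, not part of the original steps:

```python
# A minimal sketch, not production code: Flask and the Python server SDK are
# assumed; is_authenticated, the hub name, and the header are placeholders.
from flask import Flask, abort, jsonify, request
from azure.messaging.webpubsubservice import WebPubSubServiceClient

app = Flask(__name__)
service = WebPubSubServiceClient.from_connection_string(
    "<connection-string>", hub="sample_hub"
)

def is_authenticated(req) -> bool:
    # Replace with your real authentication/authorization workflow.
    return "x-user-id" in req.headers

@app.route("/negotiate")
def negotiate():
    if not is_authenticated(request):
        abort(401)
    token = service.get_client_access_token(
        user_id=request.headers["x-user-id"], minutes_to_expire=5
    )
    # token["url"] is the Client Access URL the client connects to.
    return jsonify({"url": token["url"]})
```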
You can enable Azure AD in your service and use the Azure AD token to invoke [Ge
1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Azure AD.
2. Follow [Get Azure AD token](./howto-authorize-from-application.md#use-postman-to-get-the-azure-ad-token) to get the Azure AD token with Postman.
3. Use the Azure AD token to invoke `:generateToken` with Postman:
- 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01`
- 2. On the **Auth** tab, select **Bearer Token** and paste the Azure AD token fetched in the previous step
- 3. Select **Send** and you see the Client Access Token in the response:
- ```json
- {
- "token": "ABCDEFG.ABC.ABC"
- }
- ```
+ 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01`
+ 2. On the **Auth** tab, select **Bearer Token** and paste the Azure AD token fetched in the previous step
+ 3. Select **Send** and you see the Client Access Token in the response:
+ ```json
+ {
+ "token": "ABCDEFG.ABC.ABC"
+ }
+ ```
4. The Client Access URL is in the format `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
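If you prefer to script the Postman steps above, a rough Python equivalent using `requests` might look like the following sketch; the endpoint, hub, and Azure AD token are placeholders:

```python
# A rough sketch of the :generateToken call, assuming the requests package
# and an Azure AD token already acquired for the Web PubSub resource.
import requests

endpoint = "https://<your-instance>.webpubsub.azure.com"
hub = "<hub>"
aad_token = "<azure-ad-token>"

resp = requests.post(
    f"{endpoint}/api/hubs/{hub}/:generateToken",
    params={"api-version": "2022-11-01"},
    headers={"Authorization": f"Bearer {aad_token}"},
)
resp.raise_for_status()
token = resp.json()["token"]
# Build the Client Access URL from the returned token.
client_url = f"wss://<your-instance>.webpubsub.azure.com/client/hubs/{hub}?access_token={token}"
```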
azure-web-pubsub Howto Monitor Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-monitor-azure-policy.md
[Azure Policy](../governance/policy/overview.md) is a free service in Azure to create, assign, and manage policies that enforce rules and effects to ensure your resources stay compliant with your corporate standards and service level agreements. Use these policies to audit Web PubSub resources for compliance.
-This article describes the built-in policies for Azure Web PubSub Service.
+This article describes the built-in policies for Azure Web PubSub Service.
## Built-in policy definitions

The following table contains an index of Azure Policy built-in policy definitions for Azure Web PubSub. For Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the Version column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
The name of each built-in policy definition links to the policy definition in th
When assigning a policy definition:
-* You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs.
-* Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md).
-* You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
-* Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
+- You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs (see the sketch after this list).
+- Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md).
+- You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time.
+- Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope.
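As an illustration of the SDK route, here's a hedged sketch using the Azure SDK for Python (`azure-mgmt-resource`); the subscription, scope, and policy definition ID are placeholders, and your package version may expose a slightly different model surface:

```python
# A minimal sketch, assuming the azure-identity and azure-mgmt-resource
# packages; all IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

client = PolicyClient(DefaultAzureCredential(), "<subscription-id>")
scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

# Assign a built-in definition at resource group scope.
assignment = client.policy_assignments.create(
    scope,
    "audit-webpubsub",
    PolicyAssignment(
        policy_definition_id=(
            "/providers/Microsoft.Authorization/policyDefinitions/<definition-id>"
        )
    ),
)
print(assignment.name)
```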
> [!NOTE] > After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
When a resource is non-compliant, there are many possible reasons. To determine
1. Open the Azure portal and search for **Policy**.
1. Select **Policy**.
1. Select **Compliance**.
-1. Use the filters to display by **Scope**, **Type** or **Compliance state**. Use search list by name or
- ID.
- [ ![Policy compliance in portal](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox)
-1. Select a policy to review aggregate compliance details and events.
+1. Use the filters to display by **Scope**, **Type**, or **Compliance state**, or search the list by name or ID.
+ [ ![Screenshot showing policy compliance in portal.](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox)
+1. Select a policy to review aggregate compliance details and events.
1. Select a specific Web PubSub resource to review its compliance.

### Policy compliance in the Azure CLI
az policy state list \
## Next steps
-* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-
-* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md)
-* Learn more about [governance capabilities](../governance/index.yml) in Azure
+- Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md)
+- Learn more about [governance capabilities](../governance/index.yml) in Azure
<!-- LINKS - External -->
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
azure-web-pubsub Howto Troubleshoot Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-resource-logs.md
description: Learn what resource logs are and how to use them for troubleshootin
-+ Last updated 07/21/2022 # How to troubleshoot with resource logs
-This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis.
+This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis.
-## What are resource logs?
+## What are resource logs?
+
+There are three types of resource logs: _Connectivity_, _Messaging_, and _HTTP requests_.
-There are three types of resource logs: *Connectivity*, *Messaging*, and *HTTP requests*.
- **Connectivity** logs provide detailed information for Azure Web PubSub hub connections, such as basic information (user ID, connection ID, and so on) and event information (connect, disconnect, and so on).
- **Messaging** logs provide tracing information for the Azure Web PubSub hub messages received and sent via the Azure Web PubSub service, such as the tracing ID and message type of the message.
- **HTTP requests** logs provide tracing information for HTTP requests to the Azure Web PubSub service, such as the HTTP method and status code. Typically, an HTTP request is recorded when it arrives at or leaves the service.
The Azure Web PubSub service live trace tool has ability to collect resource log
> [!NOTE] > The following considerations apply to using the live trace tool:
-> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
-> - The live trace tool does not currently support Azure Active Directory authentication. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**.
-> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit.
+>
+> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic).
+> - The live trace tool does not currently support Azure Active Directory authentication. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**.
+> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit.
### Launch the live trace tool
The Azure Web PubSub service live trace tool has ability to collect resource log
1. Select **Save** and then wait until the settings take effect.
1. Select **Open Live Trace Tool**.
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool.":::
### Capture the resource logs

The live trace tool provides functionality to help you capture the resource logs for troubleshooting.
-* **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub.
-* **Clear**: Clear the captured real-time resource logs.
-* **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific key word. The common separators (for example, space, comma, semicolon, and so on) will be treated as part of the key word.
-* **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance.
+- **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub.
+- **Clear**: Clear the captured real-time resource logs.
+- **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific key word. The common separators (for example, space, comma, semicolon, and so on) will be treated as part of the key word.
+- **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance.
:::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/live-trace-tool-capture.png" alt-text="Screenshot of capturing resource logs with live trace tool.":::
-The real-time resource logs captured by live trace tool contain detailed information for troubleshooting.
-
-| Name | Description |
-| | |
-| Time | Log event time |
-| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] |
-| Event Name | Operation name of the event |
-| Message | Detailed message for the event |
-| Exception | The run-time exception of Azure Web PubSub service |
-| Hub | User-defined hub name |
-| Connection ID | Identity of the connection |
-| User ID | User identity|
-| IP | Client IP address |
-| Route Template | The route template of the API |
-| Http Method | The Http method (POST/GET/PUT/DELETE) |
-| URL | The uniform resource locator |
-| Trace ID | The unique identifier to the invocation |
-| Status Code | The Http response code |
-| Duration | The duration between receiving the request and processing the request |
-| Headers | The additional information passed by the client and the server with an HTTP request or response |
+The real-time resource logs captured by live trace tool contain detailed information for troubleshooting.
+
+| Name | Description |
+| -- | -- |
+| Time | Log event time |
+| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] |
+| Event Name | Operation name of the event |
+| Message | Detailed message for the event |
+| Exception | The run-time exception of Azure Web PubSub service |
+| Hub | User-defined hub name |
+| Connection ID | Identity of the connection |
+| User ID | User identity |
+| IP | Client IP address |
+| Route Template | The route template of the API |
+| Http Method | The Http method (POST/GET/PUT/DELETE) |
+| URL | The uniform resource locator |
+| Trace ID | The unique identifier to the invocation |
+| Status Code | The Http response code |
+| Duration | The duration between receiving the request and processing the request |
+| Headers | The additional information passed by the client and the server with an HTTP request or response |
## Capture resource logs with Azure Monitor
Currently Azure Web PubSub supports integration with [Azure Storage](../azure-mo
1. Go to the Azure portal.
1. On the **Diagnostic settings** page of your Azure Web PubSub service instance, select **+ Add diagnostic setting**.
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one":::
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one.":::
1. In **Diagnostic setting name**, enter the setting name.
1. In **Category details**, select any log category you need.
1. In **Destination details**, check **Archive to a storage account**. (A programmatic sketch follows these steps.)
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail":::
+ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail":::
+ 1. Select **Save** to save the diagnostic setting.
-> [!NOTE]
-> The storage account should be in the same region as Azure Web PubSub service.
+ > [!NOTE]
+ > The storage account should be in the same region as Azure Web PubSub service.
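If you'd rather create the diagnostic setting programmatically, a rough sketch with the `azure-mgmt-monitor` package follows; the resource IDs are placeholders, and the log category names shown here are assumptions you should verify against your instance's **Category details**:

```python
# A rough sketch, assuming azure-identity and azure-mgmt-monitor; resource
# IDs are placeholders and the category names are unverified assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

webpubsub_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.SignalRService/webPubSub/<instance-name>"
)
storage_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-name>"
)

client.diagnostic_settings.create_or_update(
    resource_uri=webpubsub_id,
    name="webpubsub-logs",
    parameters={
        "storage_account_id": storage_id,
        "logs": [
            {"category": "ConnectivityLogs", "enabled": True},
            {"category": "MessagingLogs", "enabled": True},
            {"category": "HttpRequestLogs", "enabled": True},
        ],
    },
)
```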
### Archive to an Azure Storage Account
Archive log JSON strings include elements listed in the following tables:
**Format**
-Name | Description
-- | -
-time | Log event time
-level | Log event level
-resourceId | Resource ID of your Azure SignalR Service
-location | Location of your Azure SignalR Service
-category | Category of the log event
-operationName | Operation name of the event
-callerIpAddress | IP address of your server or client
-properties | Detailed properties related to this log event. For more detail, see the properties table below
+| Name | Description |
+| | - |
+| time | Log event time |
+| level | Log event level |
+| resourceId | Resource ID of your Azure SignalR Service |
+| location | Location of your Azure SignalR Service |
+| category | Category of the log event |
+| operationName | Operation name of the event |
+| callerIpAddress | IP address of your server or client |
+| properties | Detailed properties related to this log event. For more detail, see the properties table below |
**Properties Table**
-Name | Description
-- | -
-collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling`
-connectionId | Identity of the connection
-userId | Identity of the user
-message | Detailed message of log event
-hub | User-defined Hub Name |
-routeTemplate | The route template of the API |
-httpMethod | The Http method (POST/GET/PUT/DELETE) |
-url | The uniform resource locator |
-traceId | The unique identifier to the invocation |
-statusCode | The Http response code |
-duration | The duration between the request is received and processed |
-headers | The additional information passed by the client and the server with an HTTP request or response |
+| Name | Description |
+| - | -- |
+| collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` |
+| connectionId | Identity of the connection |
+| userId | Identity of the user |
+| message | Detailed message of log event |
+| hub | User-defined Hub Name |
+| routeTemplate | The route template of the API |
+| httpMethod | The Http method (POST/GET/PUT/DELETE) |
+| url | The uniform resource locator |
+| traceId | The unique identifier to the invocation |
+| statusCode | The Http response code |
+| duration | The duration between receiving the request and processing the request |
+| headers | The additional information passed by the client and the server with an HTTP request or response |
The following code is an example of an archive log JSON string:
The following code is an example of an archive log JSON string:
### Archive to Azure Log Analytics

To send logs to a Log Analytics workspace:
-1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace.
+
+1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace**.
1. Select the **Subscription** you want to use.
1. Select the **Log Analytics workspace** to use as the destination for the logs.
To view the resource logs, follow these steps:
1. Select `Logs` in your target Log Analytics.
- :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
+ :::image type="content" alt-text="Screenshot showing the Log Analytics menu item." source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png":::
1. Enter `WebPubSubConnectivity`, `WebPubSubMessaging` or `WebPubSubHttpRequest`, and then select the time range to query the log. For advanced queries, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md).
- :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
-
+ :::image type="content" alt-text="Screenshot showing the Query log in Log Analytics." source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png":::
To use a sample query for SignalR service, follow the steps below.

1. Select `Logs` in your target Log Analytics.
1. Select `Queries` to open the query explorer.
1. Select `Resource type` to group sample queries by resource type.
1. Select `Run` to run the script.
- :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
-
+ :::image type="content" alt-text="Screenshot showing the sample query in Log Analytics." source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png":::
Archive log columns include elements listed in the following table.
-Name | Description
-- | -
-TimeGenerated | Log event time
-Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling`
-OperationName | Operation name of the event
-Location | Location of your Azure SignalR Service
-Level | Log event level
-CallerIpAddress | IP address of your server/client
-Message | Detailed message of log event
-UserId | Identity of the user
-ConnectionId | Identity of the connection
-ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side
-TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling`
+| Name | Description |
+| | - |
+| TimeGenerated | Log event time |
+| Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` |
+| OperationName | Operation name of the event |
+| Location | Location of your Azure SignalR Service |
+| Level | Log event level |
+| CallerIpAddress | IP address of your server/client |
+| Message | Detailed message of log event |
+| UserId | Identity of the user |
+| ConnectionId | Identity of the connection |
+| ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side |
+| TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling` |
## Troubleshoot with the resource logs
The difference between `ConnectionAborted` and `ConnectionEnded` is that `Connec
The abort reasons are listed in the following table:
-| Reason | Description |
-| - | - |
-| Connection count reaches limit | Connection count reaches limit of your current price tier. Consider scale up service unit
-| Service reloading, reconnect | Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to Azure Web PubSub service |
-| Internal server transient error | Transient error occurs in Azure Web PubSub service, should be auto recovered
+| Reason | Description |
+| - | - |
+| Connection count reaches limit | The connection count reaches the limit of your current pricing tier. Consider scaling up the service unit. |
+| Service reloading, reconnect | The Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to the Azure Web PubSub service (see the sketch after this table). |
+| Internal server transient error | A transient error occurred in the Azure Web PubSub service; it should recover automatically. |
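For the `Service reloading, reconnect` case, a minimal client-side reconnect loop might look like the following sketch, assuming the `websockets` package and a hypothetical `get_client_access_url()` helper that returns a fresh Client Access URL:

```python
# A minimal reconnect sketch; the websockets package is assumed and
# get_client_access_url() is a hypothetical helper you provide.
import asyncio
import websockets

async def run_with_reconnect():
    while True:
        try:
            async with websockets.connect(get_client_access_url()) as ws:
                async for message in ws:
                    print(message)
        except websockets.ConnectionClosed:
            # The connection was aborted (for example, service reloading);
            # back off briefly, then reconnect with a fresh URL.
            await asyncio.sleep(1)

asyncio.run(run_with_reconnect())
```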
#### Unexpected increase in connections
If you get 401 Unauthorized returned for client requests, check your resource lo
### Throttling
-If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
+If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/).
azure-web-pubsub Howto Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-use-managed-identity.md
This article shows you how to create a managed identity for Azure Web PubSub Service and how to use it.
-> [!Important]
-> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity.
+> [!Important]
+> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity.
## Add a system-assigned identity
To set up a managed identity in the Azure portal, you'll first create an Azure W
2. Select **Identity**.
-4. On the **System assigned** tab, switch **Status** to **On**. Select **Save**.
+3. On the **System assigned** tab, switch **Status** to **On**. Select **Save**.
- :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal":::
+ :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Screenshot showing Add a system-assigned identity in the portal.":::
## Add a user-assigned identity
Creating an Azure Web PubSub Service instance with a user-assigned identity requ
5. Search for the identity that you created earlier and select it. Select **Add**.
- :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
+ :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Screenshot showing Add a user-assigned identity in the portal.":::
## Use a managed identity in client events scenarios
Azure Web PubSub Service is a fully managed service, so you can't use a managed
2. Navigate to the rule and switch on **Authentication**.
- :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="msi-setting":::
+ :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="Screenshot showing the msi-setting.":::
3. Select an application. The application ID will become the `aud` claim in the obtained access token, which can be used as part of the validation in your event handler (see the sketch under **Validate access tokens** below). You can choose one of the following:
- - Use default AAD application.
- - Select from existing AAD applications. The application ID of the one you choose will be used.
- - Specify an AAD application. The value should be [Resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication)
- > [!NOTE]
- > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests.
+ - Use default AAD application.
+ - Select from existing AAD applications. The application ID of the one you choose will be used.
+ - Specify an AAD application. The value should be [Resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication)
+
+ > [!NOTE]
+ > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests.
### Validate access tokens
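As one example of validating such a token, here's a minimal sketch using the PyJWT package; the tenant ID and the expected audience (the application ID selected above) are placeholders:

```python
# A minimal sketch, assuming the PyJWT package (with its crypto extra);
# the tenant ID and expected audience are placeholders.
import jwt

TENANT_ID = "<tenant-id>"
EXPECTED_AUDIENCE = "<application-id>"
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"

def validate_access_token(token: str) -> dict:
    # Fetch the signing key matching the token's key ID (kid).
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # Verify the signature and check the aud claim against your application.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
    )
```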
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-live-demo.md
In this quickstart, we use the *Client URL Generator* to generate a temporarily
In real-world applications, you can use SDKs in various languages to build your own application. We also provide Function extensions for you to build serverless applications easily.
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
description: A tutorial to walk through how to use Azure Web PubSub service and
-+ Last updated 05/05/2023
The Azure Web PubSub service helps you build real-time messaging web application
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Build a serverless real-time chat app
-> * Work with Web PubSub function trigger bindings and output bindings
-> * Deploy the function to Azure Function App
-> * Configure Azure Authentication
-> * Configure Web PubSub Event Handler to route events and messages to the application
+>
+> - Build a serverless real-time chat app
+> - Work with Web PubSub function trigger bindings and output bindings
+> - Deploy the function to Azure Function App
+> - Configure Azure Authentication
+> - Configure Web PubSub Event Handler to route events and messages to the application
## Prerequisites

# [JavaScript](#tab/javascript)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
-* [Node.js](https://nodejs.org/en/download/), version 10.x.
- > [!NOTE]
- > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Node.js](https://nodejs.org/en/download/), version 10.x.
+ > [!NOTE]
+ > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
# [C# in-process](#tab/csharp-in-process)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
# [C# isolated process](#tab/csharp-isolated-process)
-* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/).
-* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
-* The [Azure CLI](/cli/azure) to manage Azure resources.
+- The [Azure CLI](/cli/azure) to manage Azure resources.
In this tutorial, you learn how to:
1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. Then create an empty directory for the project and run the following command in that working directory.
- # [JavaScript](#tab/javascript)
- ```bash
- func init --worker-runtime javascript
- ```
+ # [JavaScript](#tab/javascript)
+
+ ```bash
+ func init --worker-runtime javascript
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ ```bash
+ func init --worker-runtime dotnet
+ ```
- # [C# in-process](#tab/csharp-in-process)
- ```bash
- func init --worker-runtime dotnet
- ```
+ # [C# isolated process](#tab/csharp-isolated-process)
- # [C# isolated process](#tab/csharp-isolated-process)
- ```bash
- func init --worker-runtime dotnet-isolated
- ```
+ ```bash
+ func init --worker-runtime dotnet-isolated
+ ```
2. Install `Microsoft.Azure.WebJobs.Extensions.WebPubSub`.
-
- # [JavaScript](#tab/javascript)
- Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
- ```json
- {
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.*, 4.0.0)"
- }
- }
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- ```bash
- dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- ```bash
- dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
- ```
+
+ # [JavaScript](#tab/javascript)
+
+ Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[3.3.*, 4.0.0)"
+ }
+ }
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+ ```bash
+ dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+ ```bash
+ dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease
+ ```
3. Create an `index` function to read and host a static web page for clients.
- ```bash
- func new -n index -t HttpTrigger
- ```
+
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
# [JavaScript](#tab/javascript)

- Update `index/function.json` and copy the following JSON code.
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req",
- "methods": [
- "get",
- "post"
- ]
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": ["get", "post"]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+ }
+ ```
- Update `index/index.js` and copy the following code.
- ```js
- var fs = require('fs');
- var path = require('path');
-
- module.exports = function (context, req) {
-       var index = context.executionContext.functionDirectory + '/../index.html';
-       context.log("index.html path: " + index);
- fs.readFile(index, 'utf8', function (err, data) {
- if (err) {
- console.log(err);
- context.done(err);
- }
- context.res = {
- status: 200,
- headers: {
- 'Content-Type': 'text/html'
- },
- body: data
- };
- context.done();
- });
- }
- ```
+
+ ```js
+ var fs = require("fs");
+ var path = require("path");
+
+ module.exports = function (context, req) {
+ var index =
+       context.executionContext.functionDirectory + "/../index.html";
+     context.log("index.html path: " + index);
+ fs.readFile(index, "utf8", function (err, data) {
+ if (err) {
+ console.log(err);
+ context.done(err);
+ }
+ context.res = {
+ status: 200,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ body: data,
+ };
+ context.done();
+ });
+ };
+ ```
# [C# in-process](#tab/csharp-in-process)

- Update `index.cs` and replace the `Run` function with the following code.
- ```c#
- [FunctionName("index")]
- public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
- {
-       var indexFile = Path.Combine(context.FunctionAppDirectory, "index.html");
-       log.LogInformation($"index.html path: {indexFile}.");
- return new ContentResult
- {
- Content = File.ReadAllText(indexFile),
- ContentType = "text/html",
- };
- }
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
+ ```csharp
+ [FunctionName("index")]
+ public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
+ {
+       var indexFile = Path.Combine(context.FunctionAppDirectory, "index.html");
+       log.LogInformation($"index.html path: {indexFile}.");
+ return new ContentResult
+ {
+ Content = File.ReadAllText(indexFile),
+ ContentType = "text/html",
+ };
+ }
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+ - Update `index.cs` and replace the `Run` function with the following code.
- ```c#
- [Function("index")]
- public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
- {
-       var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../index.html");
-       _logger.LogInformation($"index.html path: {path}.");
-
- var response = req.CreateResponse();
- response.WriteString(File.ReadAllText(path));
- response.Headers.Add("Content-Type", "text/html");
- return response;
- }
- ```
+
+ ```csharp
+ [Function("index")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context)
+ {
+       var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../index.html");
+       _logger.LogInformation($"index.html path: {path}.");
+
+ var response = req.CreateResponse();
+ response.WriteString(File.ReadAllText(path));
+ response.Headers.Add("Content-Type", "text/html");
+ return response;
+ }
+ ```
4. Create a `negotiate` function to help clients get the service connection URL with an access token.
- ```bash
- func new -n negotiate -t HttpTrigger
- ```
- > [!NOTE]
- > In this sample, we use [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. And this won't work in a local function. You can make it empty or change to other ways to get or generate `userId` when playing in local. For example, let client type a user name and pass it in query like `?user={$username}` when call `negotiate` function to get service connection url. And in the `negotiate` function, set `userId` with value `{query.user}`.
-
- # [JavaScript](#tab/javascript)
- - Update `negotiate/function.json` and copy following json codes.
- ```json
- {
- "bindings": [
- {
- "authLevel": "anonymous",
- "type": "httpTrigger",
- "direction": "in",
- "name": "req"
- },
- {
- "type": "http",
- "direction": "out",
- "name": "res"
- },
- {
- "type": "webPubSubConnection",
- "name": "connection",
- "hub": "simplechat",
- "userId": "{headers.x-ms-client-principal-name}",
- "direction": "in"
- }
- ]
- }
- ```
- - Update `negotiate/index.js` and copy following codes.
- ```js
- module.exports = function (context, req, connection) {
- context.res = { body: connection };
- context.done();
- };
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- - Update `negotiate.cs` and replace `Run` function with following codes.
- ```c#
- [FunctionName("negotiate")]
- public static WebPubSubConnection Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
- [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection,
- ILogger log)
- {
- log.LogInformation("Connecting...");
- return connection;
- }
- ```
- - Add `using` statements in header to resolve required dependencies.
- ```c#
- using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- - Update `negotiate.cs` and replace `Run` function with following codes.
- ```c#
- [Function("negotiate")]
- public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
- [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo)
- {
- var response = req.CreateResponse(HttpStatusCode.OK);
- response.WriteAsJsonAsync(connectionInfo);
- return response;
- }
- ```
+
+ ```bash
+ func new -n negotiate -t HttpTrigger
+ ```
+
+ > [!NOTE]
+   > In this sample, we use the [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. This header isn't available when the function runs locally, so you can leave `userId` empty or get or generate it another way when testing locally. For example, let the client type a user name and pass it in the query string, like `?user={$username}`, when calling the `negotiate` function, and in the `negotiate` function set `userId` to `{query.user}`.
+
+ # [JavaScript](#tab/javascript)
+
+   - Update `negotiate/function.json` and copy the following JSON code.
+ ```json
+ {
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "webPubSubConnection",
+ "name": "connection",
+ "hub": "simplechat",
+ "userId": "{headers.x-ms-client-principal-name}",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+   - Update `negotiate/index.js` and copy the following code.
+ ```js
+ module.exports = function (context, req, connection) {
+ context.res = { body: connection };
+ context.done();
+ };
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   - Update `negotiate.cs` and replace the `Run` function with the following code.
+ ```csharp
+ [FunctionName("negotiate")]
+ public static WebPubSubConnection Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+ [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection,
+ ILogger log)
+ {
+ log.LogInformation("Connecting...");
+ return connection;
+ }
+ ```
+   - Add `using` statements at the top of the file to resolve the required dependencies.
+ ```csharp
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+   - Update `negotiate.cs` and replace the `Run` function with the following code.
+ ```csharp
+ [Function("negotiate")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
+ [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo)
+ {
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.WriteAsJsonAsync(connectionInfo);
+ return response;
+ }
+ ```
5. Create a `message` function to broadcast client messages through the service.

   ```bash
   func new -n message -t HttpTrigger
   ```
In this tutorial, you learn how to:
> This function actually uses `WebPubSubTrigger`. However, `WebPubSubTrigger` isn't integrated into the function templates, so we use `HttpTrigger` to initialize the function template and change the trigger type in code.

# [JavaScript](#tab/javascript)

- Update `message/function.json` and copy the following JSON code.
- ```json
- {
- "bindings": [
- {
- "type": "webPubSubTrigger",
- "direction": "in",
- "name": "data",
- "hub": "simplechat",
- "eventName": "message",
- "eventType": "user"
- },
- {
- "type": "webPubSub",
- "name": "actions",
- "hub": "simplechat",
- "direction": "out"
- }
- ]
- }
- ```
+ ```json
+ {
+ "bindings": [
+ {
+ "type": "webPubSubTrigger",
+ "direction": "in",
+ "name": "data",
+ "hub": "simplechat",
+ "eventName": "message",
+ "eventType": "user"
+ },
+ {
+ "type": "webPubSub",
+ "name": "actions",
+ "hub": "simplechat",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
- Update `message/index.js` and copy the following code.
- ```js
- module.exports = async function (context, data) {
- context.bindings.actions = {
- "actionName": "sendToAll",
- "data": `[${context.bindingData.request.connectionContext.userId}] ${data}`,
- "dataType": context.bindingData.dataType
- };
- // UserEventResponse directly return to caller
- var response = {
- "data": '[SYSTEM] ack.',
- "dataType" : "text"
- };
- return response;
- };
- ```
-
- # [C# in-process](#tab/csharp-in-process)
- - Update `message.cs` and replace `Run` function with following codes.
- ```c#
- [FunctionName("message")]
- public static async Task<UserEventResponse> Run(
- [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request,
- BinaryData data,
- WebPubSubDataType dataType,
- [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions)
- {
- await actions.AddAsync(WebPubSubAction.CreateSendToAllAction(
- BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"),
- dataType));
- return new UserEventResponse
- {
- Data = BinaryData.FromString("[SYSTEM] ack"),
- DataType = WebPubSubDataType.Text
- };
- }
- ```
- - Add `using` statements in header to resolve required dependencies.
- ```c#
- using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
- using Microsoft.Azure.WebPubSub.Common;
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- - Update `message.cs` and replace `Run` function with following codes.
- ```c#
- [Function("message")]
- [WebPubSubOutput(Hub = "simplechat")]
- public SendToAllAction Run(
- [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request)
- {
- return new SendToAllAction
- {
- Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"),
- DataType = request.DataType
- };
- }
- ```
+ ```js
+ module.exports = async function (context, data) {
+ context.bindings.actions = {
+ actionName: "sendToAll",
+ data: `[${context.bindingData.request.connectionContext.userId}] ${data}`,
+ dataType: context.bindingData.dataType,
+ };
+ // UserEventResponse directly return to caller
+ var response = {
+ data: "[SYSTEM] ack.",
+ dataType: "text",
+ };
+ return response;
+ };
+ ```
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   - Update `message.cs` and replace the `Run` function with the following code.
+ ```csharp
+ [FunctionName("message")]
+ public static async Task<UserEventResponse> Run(
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request,
+ BinaryData data,
+ WebPubSubDataType dataType,
+ [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions)
+ {
+ await actions.AddAsync(WebPubSubAction.CreateSendToAllAction(
+ BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"),
+ dataType));
+ return new UserEventResponse
+ {
+ Data = BinaryData.FromString("[SYSTEM] ack"),
+ DataType = WebPubSubDataType.Text
+ };
+ }
+ ```
+   - Add `using` statements at the top of the file to resolve the required dependencies.
+ ```csharp
+ using Microsoft.Azure.WebJobs.Extensions.WebPubSub;
+ using Microsoft.Azure.WebPubSub.Common;
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+   - Update `message.cs` and replace the `Run` function with the following code.
+ ```csharp
+ [Function("message")]
+ [WebPubSubOutput(Hub = "simplechat")]
+ public SendToAllAction Run(
+ [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request)
+ {
+ return new SendToAllAction
+ {
+ Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"),
+ DataType = request.DataType
+ };
+ }
+ ```
6. Add the client single page `index.html` in the project root folder and copy the following content.
- ```html
- <html>
- <body>
- <h1>Azure Web PubSub Serverless Chat App</h1>
- <div id="login"></div>
- <p></p>
- <input id="message" placeholder="Type to chat...">
- <div id="messages"></div>
- <script>
- (async function () {
- let authenticated = window.location.href.includes('?authenticated=true');
- if (!authenticated) {
- // auth
- let login = document.querySelector("#login");
- let link = document.createElement('a');
- link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`;
- link.text = "login";
- login.appendChild(link);
- }
- else {
- // negotiate
- let messages = document.querySelector('#messages');
- let res = await fetch(`${window.location.origin}/api/negotiate`, {
- credentials: "include"
- });
- let url = await res.json();
- // connect
- let ws = new WebSocket(url.url);
- ws.onopen = () => console.log('connected');
- ws.onmessage = event => {
- let m = document.createElement('p');
- m.innerText = event.data;
- messages.appendChild(m);
- };
- let message = document.querySelector('#message');
- message.addEventListener('keypress', e => {
- if (e.charCode !== 13) return;
- ws.send(message.value);
- message.value = '';
- });
- }
- })();
- </script>
- </body>
- </html>
- ```
-
- # [JavaScript](#tab/javascript)
-
- # [C# in-process](#tab/csharp-in-process)
- Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
- ```xml
- <ItemGroup>
-     <None Update="index.html">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
- ```
-
- # [C# isolated process](#tab/csharp-isolated-process)
- Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
- ```xml
- <ItemGroup>
-     <None Update="index.html">
- <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
- </None>
- </ItemGroup>
- ```
+
+ ```html
+ <html>
+ <body>
+ <h1>Azure Web PubSub Serverless Chat App</h1>
+ <div id="login"></div>
+ <p></p>
+ <input id="message" placeholder="Type to chat..." />
+ <div id="messages"></div>
+ <script>
+ (async function () {
+ let authenticated = window.location.href.includes(
+ "?authenticated=true"
+ );
+ if (!authenticated) {
+ // auth
+ let login = document.querySelector("#login");
+ let link = document.createElement("a");
+ link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`;
+ link.text = "login";
+ login.appendChild(link);
+ } else {
+ // negotiate
+ let messages = document.querySelector("#messages");
+ let res = await fetch(`${window.location.origin}/api/negotiate`, {
+ credentials: "include",
+ });
+ let url = await res.json();
+ // connect
+ let ws = new WebSocket(url.url);
+ ws.onopen = () => console.log("connected");
+ ws.onmessage = (event) => {
+ let m = document.createElement("p");
+ m.innerText = event.data;
+ messages.appendChild(m);
+ };
+ let message = document.querySelector("#message");
+ message.addEventListener("keypress", (e) => {
+ if (e.charCode !== 13) return;
+ ws.send(message.value);
+ message.value = "";
+ });
+ }
+ })();
+ </script>
+ </body>
+ </html>
+ ```
+
+ # [JavaScript](#tab/javascript)
+
+ # [C# in-process](#tab/csharp-in-process)
+
+   Since the C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+
+ ```xml
+ <ItemGroup>
+       <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
+
+ # [C# isolated process](#tab/csharp-isolated-process)
+
+   Since the C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
+
+ ```xml
+ <ItemGroup>
+       <None Update="index.html">
+ <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
+ </None>
+ </ItemGroup>
+ ```
## Create and Deploy the Azure Function App

Before you can deploy your function code to Azure, you need to create three resources:
-* A resource group, which is a logical container for related resources.
-* A storage account, which is used to maintain state and other information about your functions.
-* A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
-Use the following commands to create these items.
+- A resource group, which is a logical container for related resources.
+- A storage account, which is used to maintain state and other information about your functions.
+- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources.
+
+Use the following commands to create these items.
1. If you haven't done so already, sign in to Azure:
- ```azurecli
- az login
- ```
+ ```azurecli
+ az login
+ ```
1. Create a resource group, or skip this step by reusing the resource group of your Azure Web PubSub service:
- ```azurecli
- az group create -n WebPubSubFunction -l <REGION>
- ```
+ ```azurecli
+ az group create -n WebPubSubFunction -l <REGION>
+ ```
1. Create a general-purpose storage account in your resource group and region:
- ```azurecli
- az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction
- ```
+ ```azurecli
+ az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction
+ ```
1. Create the function app in Azure:
- # [JavaScript](#tab/javascript)
+ # [JavaScript](#tab/javascript)
+
+ ```azurecli
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
- > [!NOTE]
- > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+ > [!NOTE]
+ > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
- # [C# in-process](#tab/csharp-in-process)
+ # [C# in-process](#tab/csharp-in-process)
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
+ ```azurecli
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
- # [C# isolated process](#tab/csharp-isolated-process)
+ # [C# isolated process](#tab/csharp-isolated-process)
- ```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
- ```
+ ```azurecli
+ az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
1. Deploy the function project to Azure:
- After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
+ After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command.
+
+ ```bash
+ func azure functionapp publish <FUNCTIONAPP_NAME>
+ ```
- ```bash
- func azure functionapp publish <FUNCIONAPP_NAME>
- ```
1. Configure the `WebPubSubConnectionString` setting for the function app: First, find your Web PubSub resource in the **Azure portal** and copy the connection string under **Keys**. Then, navigate to the function app settings in the **Azure portal** -> **Settings** -> **Configuration**, and add a new item under **Application settings** with the name `WebPubSubConnectionString` and your Web PubSub resource connection string as the value.
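   For reference, a minimal sketch of how that setting surfaces at runtime: app settings are exposed to the function code as environment variables, which is how the connection string is assumed to be picked up here.

   ```js
   // Minimal sketch: inside the function app, an application setting named
   // WebPubSubConnectionString is available as an environment variable.
   const connectionString = process.env.WebPubSubConnectionString;
   ```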
Go to **Azure portal** -> Find your Function App resource -> **App keys** -> **S
Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find your Web PubSub resource -> **Settings**. Add a new hub setting mapping to the function in use. Replace `<FUNCTIONAPP_NAME>` and `<APP_KEY>` with your own values.
- - Hub Name: `simplechat`
- - URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>**
- - User Event Pattern: *
- - System Events: -(No need to configure in this sample)
+- Hub Name: `simplechat`
+- URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>**
+- User Event Pattern: \*
+- System Events: (No need to configure in this sample)
:::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler.":::
Go to **Azure portal** -> Find your Function App resource -> **Authentication**.
Here we choose `Microsoft` as the identity provider, which uses `x-ms-client-principal-name` as the `userId` in the `negotiate` function. You can also configure other identity providers by following these links; don't forget to update the `userId` value in the `negotiate` function accordingly (a sketch follows the list).
-* [Microsoft(Azure AD)](../app-service/configure-authentication-provider-aad.md)
-* [Facebook](../app-service/configure-authentication-provider-facebook.md)
-* [Google](../app-service/configure-authentication-provider-google.md)
-* [Twitter](../app-service/configure-authentication-provider-twitter.md)
+- [Microsoft(Azure AD)](../app-service/configure-authentication-provider-aad.md)
+- [Facebook](../app-service/configure-authentication-provider-facebook.md)
+- [Google](../app-service/configure-authentication-provider-google.md)
+- [Twitter](../app-service/configure-authentication-provider-twitter.md)
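As a hedged sketch of that adjustment (the Express wiring and `/negotiate` route are illustrative assumptions, not the quickstart's actual function bindings), the authenticated user name arrives in the `x-ms-client-principal-name` header and feeds the `userId` of the generated client access URL:

```js
// Illustrative sketch only: maps the App Service authentication header to the
// userId of a Web PubSub client access URL.
const express = require("express");
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const app = express();
const serviceClient = new WebPubSubServiceClient(
  process.env.WebPubSubConnectionString, // set in Application settings above
  "simplechat"
);

app.get("/negotiate", async (req, res) => {
  // App Service authentication injects the signed-in user's name here.
  const userId = req.headers["x-ms-client-principal-name"];
  const token = await serviceClient.getClientAccessToken({ userId });
  res.json({ url: token.url });
});

app.listen(3000);
```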
## Try the application Now you're able to test your page from your function app: `https://<FUNCTIONAPP_NAME>.azurewebsites.net/api/index`. See snapshot.+ 1. Click `login` to authenticate yourself. 2. Type a message in the input box to chat.
If you're not going to continue to use this app, delete all resources created by
## Next steps
-In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application.
+In this quickstart, you learned how to run a serverless chat application. Now you can start to build your own application.
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Quick start: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md)
-> [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
azure-web-pubsub Reference Rest Api Data Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md
As illustrated by the above workflow graph, and also detailed workflow described
In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate with Azure Web PubSub Service. <a name="signing"></a>+ #### Signing Algorithm and Signature `HS256`, namely HMAC-SHA256, is used as the signing algorithm.
You should use the `AccessKey` in Azure Web PubSub Service instance's connection
The following claims are required to be included in the JWT token.
-Claim Type | Is Required | Description
-||
-`aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`.
-`exp` | true | Epoch time when this token will be expired.
+| Claim Type | Is Required | Description |
+| - | -- | - |
+| `aud` | true | Should be the **SAME** as your HTTP request URL. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`. |
+| `exp` | true | Epoch time when this token expires. |
Pseudo code in JS:+ ```js const bearerToken = jwt.sign({}, connectionString.accessKey, {
- audience: request.url,
- expiresIn: "1h",
- algorithm: "HS256",
- });
+ audience: request.url,
+ expiresIn: "1h",
+ algorithm: "HS256",
+});
``` ### Authenticate via Azure Active Directory Token (Azure AD Token)
-Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
+Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-**The difference is**, in this scenario, JWT Token is generated by Azure Active Directory.
+**The difference is**, in this scenario, JWT Token is generated by Azure Active Directory.
[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
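As a hedged illustration (not part of this reference), acquiring such a token with the `@azure/identity` library might look like the following sketch; treat the `https://webpubsub.azure.com/.default` scope as an assumption to verify for your resource:

```js
const { DefaultAzureCredential } = require("@azure/identity");

async function getAadBearerToken() {
  // DefaultAzureCredential tries environment variables, managed identity,
  // Azure CLI sign-in, and so on, in order.
  const credential = new DefaultAzureCredential();
  // The scope below is assumed for illustration; it selects the Web PubSub audience.
  const accessToken = await credential.getToken("https://webpubsub.azure.com/.default");
  return accessToken.token; // send as: Authorization: Bearer <token>
}
```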
You could also use **Role Based Access Control (RBAC)** to authorize the request
[Learn how to configure Role Based Access Control roles for your resource](./howto-authorize-from-application.md#add-role-assignments-on-azure-portal)
-## APIs
+## APIs
-| Operation Group | Description |
-|--|-|
-|[Service Status](/rest/api/webpubsub/dataplane/health-api)| Provides operations to check the service status |
-|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. |
+| Operation Group | Description |
+| -- | |
+| [Service Status](/rest/api/webpubsub/dataplane/health-api) | Provides operations to check the service status |
+| [Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub) | Provides operations to manage the connections and send messages to them. |
azure-web-pubsub Reference Server Sdk Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-csharp.md
-+ Last updated 11/11/2021
You can use this library in your app server side to manage the WebSocket client
Use this library to: -- Send messages to hubs and groups.
+- Send messages to hubs and groups.
- Send messages to particular users and connections. - Organize users and connections into groups. - Close connections
azure-web-pubsub Reference Server Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-java.md
-+ Last updated 01/31/2023 + # Azure Web PubSub service client library for Java [Azure Web PubSub service](./index.yml) is an Azure-managed service that helps developers easily build web applications with real-time features and a publish-subscribe pattern. Any scenario that requires real-time publish-subscribe messaging between server and clients or among clients can use Azure Web PubSub service. Traditional real-time features that often require polling from the server or submitting HTTP requests can also use Azure Web PubSub service.
Use this library to:
For more information, see: -- [Azure Web PubSub client library Java SDK][source_code] -- [Azure Web PubSub client library reference documentation][api]
+- [Azure Web PubSub client library Java SDK][source_code]
+- [Azure Web PubSub client library reference documentation][api]
- [Azure Web PubSub client library samples for Java][samples_readme] - [Azure Web PubSub service documentation][product_documentation]
For more information, see:
### Include the Package
-[//]: # ({x-version-update-start;com.azure:azure-messaging-webpubsub;current})
+[//]: # "{x-version-update-start;com.azure:azure-messaging-webpubsub;current}"
```xml <dependency>
For more information, see:
</dependency> ```
-[//]: # ({x-version-update-end})
+[//]: # "{x-version-update-end}"
### Create a `WebPubSubServiceClient` using connection string <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L21-L24 -->+ ```java WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder() .connectionString("{connection-string}")
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde
### Create a `WebPubSubServiceClient` using access key <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L31-L35 -->+ ```java WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder() .credential(new AzureKeyCredential("{access-key}"))
WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde
### Broadcast message to entire hub <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L47-L47 -->+ ```java webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN); ```
webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN
### Broadcast message to a group <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L59-L59 -->+ ```java webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.TEXT_PLAIN); ```
webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.T
### Send message to a connection <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L71-L71 -->+ ```java webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", WebPubSubContentType.TEXT_PLAIN); ```
webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", W
<a name="send-to-user"></a> ### Send message to a user+ <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L83-L83 -->+ ```java webPubSubServiceClient.sendToUser("Andy", "Hello Andy!", WebPubSubContentType.TEXT_PLAIN); ```
the client library to use the Netty HTTP client. Configuring or changing the HTT
By default, all client libraries use the Tomcat-native Boring SSL library to enable native-level performance for SSL operations. The Boring SSL library is an uber jar containing native libraries for Linux / macOS / Windows, and provides
-better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning][https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning].
+better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning](https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning).
[!INCLUDE [next step](includes/include-next-step.md)]
azure-web-pubsub Reference Server Sdk Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md
-+ Last updated 11/11/2021
npm install @azure/web-pubsub
```js const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
``` You can also authenticate the `WebPubSubServiceClient` using an endpoint and an `AzureKeyCredential`: ```js
-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
+const {
+ WebPubSubServiceClient,
+ AzureKeyCredential,
+} = require("@azure/web-pubsub");
const key = new AzureKeyCredential("<Key>");
-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<Endpoint>",
+ key,
+ "<hubName>"
+);
``` Or authenticate the `WebPubSubServiceClient` using [Azure Active Directory][aad_doc]
npm install @azure/identity
1. Update the source code to use `DefaultAzureCredential`: ```js
-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub");
+const {
+ WebPubSubServiceClient,
+ AzureKeyCredential,
+} = require("@azure/web-pubsub");
const key = new DefaultAzureCredential();
-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<Endpoint>",
+ key,
+ "<hubName>"
+);
``` ### Examples
const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>")
```js const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Get the access token for the WebSocket client connection to use let token = await serviceClient.getClientAccessToken();
token = await serviceClient.getClientAccessToken({ userId: "user1" });
```js const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Send a JSON message await serviceClient.sendToAll({ message: "Hello world!" });
await serviceClient.sendToAll(payload.buffer);
```js const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
const groupClient = serviceClient.group("<groupName>");
await groupClient.sendToAll(payload.buffer);
```js const { WebPubSubServiceClient } = require("@azure/web-pubsub");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
// Send a JSON message await serviceClient.sendToUser("user1", { message: "Hello world!" }); // Send a plain text message
-await serviceClient.sendToUser("user1", "Hi there!", { contentType: "text/plain" });
+await serviceClient.sendToUser("user1", "Hi there!", {
+ contentType: "text/plain",
+});
// Send a binary message const payload = new Uint8Array(10);
await serviceClient.sendToUser("user1", payload.buffer);
const { WebPubSubServiceClient } = require("@azure/web-pubsub"); const WebSocket = require("ws");
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
const groupClient = serviceClient.group("<groupName>");
const { WebPubSubServiceClient } = require("@azure/web-pubsub");
function onResponse(rawResponse: FullOperationResponse): void { console.log(rawResponse); }
-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>");
+const serviceClient = new WebPubSubServiceClient(
+ "<ConnectionString>",
+ "<hubName>"
+);
await serviceClient.sendToAll({ message: "Hello world!" }, { onResponse }); ```
const app = express();
app.use(handler.getMiddleware()); app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
); ```
const handler = new WebPubSubEventHandler("chat", {
handleConnect: (req, res) => { // auth the connection and set the userId of the connection res.success({
- userId: "<userId>"
+ userId: "<userId>",
}); },
- allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"]
+ allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"],
}); const app = express();
const app = express();
app.use(handler.getMiddleware()); app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
); ```
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express");
const handler = new WebPubSubEventHandler("chat", { allowedEndpoints: [ "https://<yourAllowedService1>.webpubsub.azure.com",
- "https://<yourAllowedService2>.webpubsub.azure.com"
- ]
+ "https://<yourAllowedService2>.webpubsub.azure.com",
+ ],
}); const app = express();
const app = express();
app.use(handler.getMiddleware()); app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
); ```
const express = require("express");
const { WebPubSubEventHandler } = require("@azure/web-pubsub-express"); const handler = new WebPubSubEventHandler("chat", {
- path: "/customPath1"
+ path: "/customPath1",
}); const app = express();
app.use(handler.getMiddleware());
app.listen(3000, () => // Azure WebPubSub Upstream ready at http://localhost:3000/customPath1
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
); ```
const handler = new WebPubSubEventHandler("chat", {
// You can also set the state here res.setState("calledTime", calledTime); res.success();
- }
+ },
}); const app = express();
const app = express();
app.use(handler.getMiddleware()); app.listen(3000, () =>
- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`)
+ console.log(
+ `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`
+ )
); ```
azure-web-pubsub Reference Server Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-python.md
description: Learn about the Python server SDK for the Azure Web PubSub service.
-+ Last updated 05/23/2022
Or use [Azure Active Directory][aad_doc] (Azure AD):
2. [Enable Azure AD authentication on your Webpubsub resource][aad_doc]. 3. Update code to use [DefaultAzureCredential][default_azure_credential].
- ```python
- >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
- >>> from azure.identity import DefaultAzureCredential
- >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential())
- ```
+ ```python
+ >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient
+ >>> from azure.identity import DefaultAzureCredential
+ >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential())
+ ```
## Examples
When you submit a pull request, a CLA-bot automatically determines whether you n
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see [Code of Conduct][code_of_conduct] FAQ or contact [Open Source Conduct Team](mailto:opencode@microsoft.com) with questions or comments. <!-- LINKS -->+ [webpubsubservice_docs]: ./index.yml [azure_cli]: /cli/azure [azure_sub]: https://azure.microsoft.com/free/
azure-web-pubsub Samples Authenticate And Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/samples-authenticate-and-connect.md
Title: Azure Web PubSub samples - authenticate and connect
-description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s)
+description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s)
Last updated 05/15/2023
zone_pivot_groups: azure-web-pubsub-samples-authenticate-and-connect + # Azure Web PubSub samples - Authenticate and connect To make use of your Azure Web PubSub resource, you need to authenticate and connect to the service first. Azure Web PubSub service distinguishes between two roles, each given a different set of capabilities.
-
+ ## Client
-The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages.
+
+The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages.
## Application server
-While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource.
+
+While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource.
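As a minimal sketch of how the two roles interact from the client's side (the `/negotiate` endpoint name is a hypothetical application server route; run it inside an async context):

```js
// Client-side sketch: ask the application server for a Client Access URL,
// then connect directly to the Web PubSub resource over WebSocket.
const response = await fetch("/negotiate"); // hypothetical app server route
const { url } = await response.json(); // URL with an embedded access token
const ws = new WebSocket(url);
ws.onopen = () => console.log("connected to Web PubSub");
```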
::: zone pivot="method-sdk-csharp"
-| Use case | Description |
+| Use case | Description |
| | -- |
-| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only.
-| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
+| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only.
+| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization.
-| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point.
+| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point.
::: zone-end ::: zone pivot="method-sdk-javascript"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/server.js#L9) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/src/index.js#L5) | Applies to client only. Client Access Token is generated on the application server.
While the client's role is often limited, the application server's role goes bey
::: zone-end ::: zone pivot="method-sdk-java"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/java/com/webpubsub/tutorial/App.java#L21) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/resources/public/https://docsupdatetracker.net/index.html#L12) | Applies to client only. Client Access Token is generated on the application server.
While the client's role is often limited, the application server's role goes bey
::: zone-end ::: zone pivot="method-sdk-python"
-| Use case | Description |
+| Use case | Description |
| | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/server.py#L19) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/public/https://docsupdatetracker.net/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.
azure-web-pubsub Socketio Migrate From Self Hosted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-migrate-from-self-hosted.md
Locate `index.js` in the server-side code.
```javascript const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io"); ```-
-3. Add configuration so that the server can connect with your Web PubSub for Socket.IO resource.
+
+3. Locate where the Socket.IO server is created in your server-side code and wrap it with `useAzureSocketIO()`:
```javascript
- const wpsOptions = {
+ const io = require("socket.io")();
+ useAzureSocketIO(io, {
hub: "eio_hub", // The hub name can be any valid string. connectionString: process.argv[2]
- };
- ```
-
-4. Locate in your server-side code where Socket.IO server is created and append `.useAzureSocketIO(wpsOptions)`:
- ```javascript
- const io = require("socket.io")();
- useAzureSocketIO(io, wpsOptions);
+ });
```
->[!IMPORTANT]
-> `useAzureSocketIO` is an asynchronous method. Here we `await`. So you need to wrap it and related code in an asynchronous function.
+ >[!IMPORTANT]
+ > `useAzureSocketIO` is an asynchronous method that performs initialization steps to connect to Web PubSub. You can `await useAzureSocketIO(...)` or use `useAzureSocketIO(...).then(...)` to make sure your app server starts serving requests only after the initialization succeeds.
-5. If you use the following server APIs, add `async` before using them as they're asynchronous with Web PubSub for Socket.IO.
+4. If you use the following server APIs, `await` them (inside an `async` function), as they're asynchronous with Web PubSub for Socket.IO; see the sketch after this list.
- [server.socketsJoin](https://socket.io/docs/v4/server-api/#serversocketsjoinrooms) - [server.socketsLeave](https://socket.io/docs/v4/server-api/#serversocketsleaverooms) - [socket.join](https://socket.io/docs/v4/server-api/#socketjoinroom)
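A minimal sketch of that change (room names are hypothetical):

```js
// With Web PubSub for Socket.IO these APIs return promises, so await them
// inside an async function.
async function rebalanceRooms(io) {
  await io.socketsJoin("room1"); // all connected sockets join "room1"
  await io.in("room1").socketsLeave("room1"); // sockets in "room1" leave it
}
```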
backup Backup Client Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md
Title: Use PowerShell to back up Windows Server to Azure
description: In this article, learn how to use PowerShell to set up Azure Backup on Windows Server or a Windows client, and manage backup and recovery. Last updated 08/24/2021 -+
Invoke-Command -Session $Session -Script { param($D, $A) Start-Process -FilePath
For more information about Azure Backup for Windows Server/Client: * [Introduction to Azure Backup](./backup-overview.md)
-* [Back up Windows Servers](backup-windows-with-mars-agent.md)
+* [Back up Windows Servers](backup-windows-with-mars-agent.md)
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 08/08/2023 Last updated : 08/16/2023 # Azure Bastion FAQ
No. You don't need to install an agent or any software on your browser or your A
See [About VM connections and features](vm-about.md) for supported features.
+### <a name="shareable-links-passwords"></a>Is Reset Password available for local users connecting via shareable link?
+
+No. Some organizations have company policies that require a password reset when a user logs into a local account for the first time. When using shareable links, the user can't change the password, even though a "Reset Password" button may appear.
+ ### <a name="audio"></a>Is remote audio available for VMs? Yes. See [About VM connections and features](vm-about.md#audio).
This may be due to the Private DNS zone for privatelink.azure.com linked to the
## Next steps
-For more information, see [What is Azure Bastion](bastion-overview.md).
+For more information, see [What is Azure Bastion](bastion-overview.md).
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md
ms.contributor: jahelmic
Last updated 05/03/2023 tags: azure-resource-manager+ Title: Azure Cloud Shell troubleshooting # Troubleshooting & Limitations of Azure Cloud Shell
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
Last updated 12/01/2021
+ # Calling capabilities supported for Teams users in Calling SDK
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
zone_pivot_groups: acs-js-csharp-java-python-+ # Quickstart: Set up and manage access tokens for Teams users
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
# Quickstart: Join your calling app to a Teams Auto Attendant + In this quickstart, you'll learn how to start a call from an Azure Communication Services user to a Teams Auto Attendant. You'll achieve it with the following steps: 1. Enable federation of Azure Communication Services resource with Teams Tenant.
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
# Quickstart: Join your calling app to a Teams call queue + In this quickstart, you'll learn how to start a call from an Azure Communication Services user to a Teams Call Queue. You'll achieve it with the following steps: 1. Enable federation of Azure Communication Services resource with Teams Tenant.
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
description: This article guides you through how to deploy an Azure Communicatio
+ Last updated 05/05/2023
You now need to wait for your resource to be provisioned and connected to the Mi
## Next steps -- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)
+- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
description: Learn how to complete the prerequisite tasks required to deploy Azu
+ Last updated 05/05/2023
Wait for confirmation that Azure Communications Gateway is enabled before moving
## Next steps -- [Create an Azure Communications Gateway resource](deploy.md)
+- [Create an Azure Communications Gateway resource](deploy.md)
confidential-computing How To Leverage Virtual Tpms In Azure Confidential Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-leverage-virtual-tpms-in-azure-confidential-vms.md
These steps list out which artifacts you need and how to get them:
The AMD Versioned Chip Endorsement Key (VCEK) is used to sign the AMD SEV-SNP report. The VCEK certificate allows you to verify that the report was signed by a genuine AMD CPU key. There are two ways to retrieve the certificate:
- a. Obtain the VCEK certificate by running the following command ΓÇô it obtains the cert from a well-known IMDS endpoint:
+ a. Obtain the VCEK certificate by running the following command; it obtains the cert from a well-known [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service) (IMDS) endpoint:
```bash curl -H Metadata:true http://169.254.169.254/metadata/THIM/amd/certification > vcek cat ./vcek | jq -r '.vcekCert , .certificateChain' > ./vcek.pem
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
Last updated 04/12/2023 -+ ms.devlang: azurecli
confidential-computing Quick Create Confidential Vm Portal Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md
Last updated 3/27/2022 -+ # Quickstart: Create confidential VM on AMD in the Azure portal
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--query customerId \ --output tsv) LOG_ANALYTICS_WORKSPACE_ID_ENC=$(printf %s $LOG_ANALYTICS_WORKSPACE_ID | base64 -w0) # Needed for the next step
- lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
+ LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \
--resource-group $GROUP_NAME \ --workspace-name $WORKSPACE_NAME \ --query primarySharedKey \ --output tsv)
- lOG_ANALYTICS_KEY_ENC=$(printf %s $lOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step
+ LOG_ANALYTICS_KEY_ENC=$(printf %s $LOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step
``` # [PowerShell](#tab/azure-powershell)
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--query customerId ` --output tsv) $LOG_ANALYTICS_WORKSPACE_ID_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_WORKSPACE_ID))# Needed for the next step
- $lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys `
+ $LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys `
--resource-group $GROUP_NAME ` --workspace-name $WORKSPACE_NAME ` --query primarySharedKey ` --output tsv)
- $lOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($lOG_ANALYTICS_KEY))
+ $LOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_KEY))
```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" \ --configuration-settings "logProcessor.appLogs.destination=log-analytics" \ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" \
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}"
+ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}"
``` # [PowerShell](#tab/azure-powershell)
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
--configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" ` --configuration-settings "logProcessor.appLogs.destination=log-analytics" ` --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" `
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}"
+ --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}"
```
container-instances Container Instances Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-application-gateway.md
az network application-gateway create \
--public-ip-address myAGPublicIPAddress \ --vnet-name myVNet \ --subnet myAGSubnet \
- --servers "$ACI_IP"
+ --servers "$ACI_IP" \
--priority 100 ```
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
Examples in this article are formatted for the Bash shell. If you prefer another
## Deploy to new virtual network > [!NOTE]
-> If you are using port 29 to have only 3 IP addresses, we recommend always to go one range above or below. For example, use port 28 so you can have at least 1 or more IP buffer per container group. By doing this, you can avoid containers in stuck, not able start or not able to stop states.
+> If you use a /29 subnet IP range, you get only 3 usable IP addresses, because a /29 provides 8 addresses and Azure reserves 5 per subnet. We recommend always going one range larger (never smaller). For example, use a /28 subnet IP range so you have at least 1 spare IP address per container group. By doing this, you can avoid containers getting stuck in states where they can't start, restart, or stop.
To deploy to a new virtual network and have Azure create the network resources for you automatically, specify the following when you execute [az container create][az-container-create]:
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
Otherwise create an x509 self-signed certificate storing it in AKV for remote si
notation verify $IMAGE ``` Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message.+
+## Next steps
+
+See [Ratify on Azure: Allow only signed images to be deployed on AKS with Notation and Ratify](https://github.com/deislabs/ratify/blob/main/docs/quickstarts/ratify-on-azure.md).
cosmos-db Cmk Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cmk-troubleshooting-guide.md
A troubleshooting solution, for example, would be to create a new identity with
After updating the account's default identity, you need to wait up to one hour for the account to exit the revoke state. If the issue isn't resolved after more than two hours, contact customer service.
-## Customer Managed Key does not exist
+## Azure Key Vault Resource not found
### Reason for error?
-You see this error when the customer managed key isn't found on the specified Azure Key Vault.
+You see this error when the Azure Key Vault or the specified key isn't found.
### Troubleshooting
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
In Azure Cosmos DB, you must explicitly configure the cross-region data replicat
**Periodic mode Backups**: By default, periodic mode account backups will be stored in geo-redundant storage. For periodic backup modes, you can configure data redundancy at the account level. There are three redundancy options for the backup storage. They are local redundancy, zone redundancy, or geo redundancy. For more information, see [periodic backup/restore](periodic-backup-restore-introduction.md).
+## Residency requirements for analytical store
+
+Analytical store data is resident by default, as it's stored in either locally redundant or zone-redundant storage. To learn more, see the [analytical store](analytical-store-introduction.md) article.
++ ## Use Azure Policy to enforce the residency requirements If you have data residency requirements that require you to keep all your data in a single Azure region, you can enforce zone-redundant or locally redundant backups for your account by using an Azure Policy. You can also enforce a policy that the Azure Cosmos DB accounts are not geo-replicated to other regions.
cosmos-db Intra Account Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md
To get started with intra-account offline container copy for NoSQL and Cassandra
### API for MongoDB
-To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline container copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription.
+To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline collection copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription.
<a name="how-to-do-container-copy"></a>
cosmos-db Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md
Last updated 06/11/2019 -+ # Certificate-based authentication for an Azure AD identity to access keys from an Azure Cosmos DB account
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
ms.devlang: powershell
-+ Last updated 07/17/2023
data-lake-store Data Lake Store Secure Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-secure-data.md
description: Learn how to secure data in Azure Data Lake Storage Gen1 using grou
+ Last updated 03/26/2018 - # Securing data stored in Azure Data Lake Storage Gen1 Securing data in Azure Data Lake Storage Gen1 is a three-step approach. Both Azure role-based access control (Azure RBAC) and access control lists (ACLs) must be set to fully enable access to data for users and security groups.
defender-for-cloud Agentless Container Registry Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
| [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 | - **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).-- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).
+- **Query vulnerability information via subassessment API** - You can get scan results via [REST API](subassessment-rest-api.md).
- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md). - **Support for disabling vulnerabilities** - Learn how to [disable vulnerabilities on images](disable-vulnerability-findings-containers.md).
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
Title: Reference list of attack paths and cloud security graph components
description: This article lists Microsoft Defender for Cloud's list of attack paths based on resource. Previously updated : 04/13/2023 Last updated : 08/15/2023 # Reference list of attack paths and cloud security graph components
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | | VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | | VM has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to storage account | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an Azure storage account |
-| Internet expsed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account |
+| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account |
### AWS EC2 instances
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| EC2 instance has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to a RDS resource | An AWS EC2 instance has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an AWS RDS resource | | Internet exposed AWS EC2 instance has high severity vulnerabilities and has insecure secret that has permission to S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has insecure secret that has permissions to S3 bucket via an IAM policy, a bucket policy or both |
+### GCP VM Instances
+
+| Attack path display name | Attack path description |
+|--|--|
+| Internet exposed VM instance has high severity vulnerabilities | GCP VM instance '[VMInstanceName]' is reachable from the internet and has high severity vulnerabilities [Remote Code Execution]. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities allowing remote code execution on the machine and is assigned a Service Account with read permission to GCP Storage bucket '[BucketName]' containing sensitive data. |
+| Internet exposed VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
+| Internet exposed VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
+| Internet exposed VM instance has high severity vulnerabilities and a hosted database installed | GCP VM instance '[VMInstanceName]' with a hosted [DatabaseType] database is reachable from the internet and has high severity vulnerabilities. |
+| Internet exposed VM with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
+| VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. |
+| VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to GCP Storage bucket '[BucketName]' containing sensitive data. |
+| VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. |
+| VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. |
+| VM instance with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. |
+ ### Azure data | Attack path display name | Attack path description |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket| | RDS snapshot is publicly available to all AWS accounts (Preview) | RDS snapshot is publicly available to all AWS accounts |
+### GCP Data
+
+| Attack path display name | Attack path description |
+|--|--|
+| GCP Storage Bucket with sensitive data is publicly accessible | GCP Storage Bucket [BucketName] with sensitive data allows public read access without authorization required. |
+ ### Azure containers Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer.
This section lists all of the cloud security graph components (connections and i
| Insight | Description | Supported entities | |--|--|--|
-| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance |
+| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance, GCP VM instance, GCP SQL admin instance |
| Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance |
-| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts |
+| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts, GCP cloud storage bucket |
| Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster | | Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
-| Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources |
+| Has tags | Lists the resource tags of the cloud resource | All Azure, AWS, and GCP resources |
| Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
-| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository |
+| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository, GCP cloud storage bucket |
| Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | Azure AD User account, IAM user | | Is external user | Indicates that the user account is outside the organization's domain | Azure AD User account | | Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity |
This section lists all of the cloud security graph components (connections and i
| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP | | Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container | | Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |
-| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image |
-| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image |
+| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image, GCP VM instance |
+| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image, GCP VM instance |
| Public IP metadata | Lists the metadata of a Public IP | Public IP |
| Identity metadata | Lists the metadata of an identity | Azure AD Identity |
This section lists all of the cloud security graph components (connections and i
|--|--|--|--|
| Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | Azure AD managed identity |
| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Azure AD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources |
-| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server | All Azure & AWS resources, All Kubernetes entities, All DevOps entities, Azure SQL database |
-| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service |
+| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server, GCP project, GCP Folder, GCP Organization | All Azure, AWS, and GCP resources, All Kubernetes entities, All DevOps entities, Azure SQL database |
+| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service, GCP VM instance, GCP instance group |
| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod |
| Member of | Indicates that the source identity is a member of the target identities group | Azure AD group, Azure AD user | Azure AD group |
| Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod |
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Previously updated : 06/29/2023 Last updated : 08/15/2023
Agentless scanning for VMs provides vulnerability assessment and software invent
|Release state:| GA |
|Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)|
| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png"::: Secret scanning (Preview) |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
| Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |
-| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
-| Encryption: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK |
+| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs)<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Compute instances<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Instance groups (managed and unmanaged) |
+| Encryption: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Google-managed encryption key<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Customer-managed encryption key (CMEK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Customer-supplied encryption key (CSEK) |
## How agentless scanning for VMs works
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 06/20/2023 Last updated : 08/10/2023 # Cloud Security Posture Management (CSPM)
Microsoft Defender CSPM protects across all your multicloud workloads, but billi
>
> - The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1, 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines.
>
-> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscription that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscriptions that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
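+> For example, a subscription with five billable resources would include vulnerability assessment for 100 unique images per month (20 per resource); scanning 110 unique image digests in that month would incur charges for the 10 additional digests at $0.29 each.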
## Plan availability
The following table summarizes each plan and their cloud availability.
| [Data exporting](export-to-siem.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Workflow automation](workflow-automation.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| Tools for remediation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Governance](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
-| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
| [Container registries vulnerability assessment](concept-agentless-containers.md), including registry scanning | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
-| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
-| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
> [!NOTE]
> If you have enabled Defender for DevOps, you will only gain cloud security graph and attack path analysis for the artifacts that arrive through those connectors.
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
The table summarizes support for data-aware posture management.
| | |
| -- | -- |
|What Azure data resources can I discover? | [Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).|
|What AWS data resources can I discover? | AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.|
-|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).|
+|What GCP data resources can I discover? | GCP storage buckets<br/> Standard Class<br/> Geo: region, dual region, multi region |
+|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).<br/><br/>GCP storage buckets: Google account permission to run script (to create a role).|
|What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .gz, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt, .xml, .parquet, .avro, .orc.|
|What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West; Jio India West; North Central US; North Europe; Norway East; South Africa North; South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West; West Central US; West Europe; West US; West US 3.<br/><br/> Discovery is done locally in the region.|
|What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/> Discovery is done locally in the region.|
+|What GCP regions are supported? | europe-west1, us-east1, us-west1, us-central1, us-east4, asia-south1, northamerica-northeast1|
|Do I need to install an agent? | No, discovery is agentless.|
|What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't include other costs except for the respective plan costs.|
|What permissions do I need to view/edit data sensitivity settings? | You need one of these Azure Active Directory roles: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.|
+| What permissions do I need to perform onboarding? | You need one of these Azure Active Directory roles: Security Admin, Contributor, or Owner on the subscription level (where the GCP projects reside). For consuming the security findings: Security Reader, Security Admin, Reader, Contributor, or Owner on the subscription level (where the GCP projects reside). |
## Configuring data sensitivity settings
Defender for Cloud starts discovering data immediately after enabling a plan, or
- It takes up to 24 hours to see the results for a first-time discovery.
- After files are updated in the discovered resources, data is refreshed within eight days.
- A new Azure storage account that's added to an already discovered subscription is discovered within 24 hours or less.
-- A new AWS S3 bucket that's added to an already discovered AWS account is discovered within 48 hours or less.
+- A new AWS S3 bucket or GCP storage bucket that's added to an already discovered AWS account or Google account is discovered within 48 hours or less.
### Discovering AWS S3 buckets
In order to protect AWS resources in Defender for Cloud, you set up an AWS conne
- To connect AWS accounts, you need Administrator permissions on the account.
- The role allows these permissions: S3 read only; KMS decrypt.
+### Discovering GCP storage buckets
+
+In order to protect GCP resources in Defender for Cloud, you can set up a Google connector using a script template to onboard the GCP account.
+
+- To discover GCP storage buckets, Defender for Cloud updates the script template.
+- The script template creates a new role in the Google account to allow permission for the Defender for Cloud scanner to access data in the GCP storage buckets.
+- To connect Google accounts, you need Administrator permissions on the account.
+
## Exposed to the internet/allows public access

Defender CSPM attack paths and cloud security graph insights include information about storage resources that are exposed to the internet and allow public access. The following table provides more details.
-**State** | **Azure storage accounts** | **AWS S3 Buckets**
- | |
-**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses.
-**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**.
-
+**State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** |
+--- | --- | --- | --- |
+**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings is enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default. |
+**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. | An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if it has an IAM (Identity and Access Management) role that meets these criteria:<br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**.<br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called "Public to internet". |
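+
+The GCP criteria above can be captured in a small check over a bucket's IAM policy. The following is a minimal sketch, assuming `policy` holds the parsed JSON of the bucket's IAM policy (for example, from `gcloud storage buckets get-iam-policy gs://<bucket> --format=json`) and `role_permissions` is a caller-supplied mapping of role names to the storage permissions they grant; both names are illustrative, not part of Defender for Cloud.
+
+```python
+# Sketch of the documented "allows public access" test for a GCP bucket.
+# `role_permissions` (role name -> permissions) is an assumed input; GCP roles
+# must be expanded to permissions separately (for example, via the IAM API).
+PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}
+EXCLUDED = {"storage.buckets.create", "storage.buckets.list"}
+
+def allows_public_access(policy: dict, role_permissions: dict[str, set[str]]) -> bool:
+    for binding in policy.get("bindings", []):
+        # A binding is public if it grants a role to allUsers/allAuthenticatedUsers...
+        if PUBLIC_PRINCIPALS & set(binding.get("members", [])):
+            perms = role_permissions.get(binding.get("role"), set())
+            # ...and the role carries at least one storage permission other than
+            # storage.buckets.create or storage.buckets.list.
+            storage_perms = {p for p in perms if p.startswith("storage.")}
+            if storage_perms - EXCLUDED:
+                return True
+    return False
+```
+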
## Next steps
defender-for-cloud Enable Agentless Scanning Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md
Previously updated : 06/29/2023 Last updated : 08/15/2023 # Enable agentless scanning for VMs
If you have Defender for Servers P2 already enabled and agentless scanning is tu
After you enable agentless scanning, software inventory and vulnerability information are updated automatically in Defender for Cloud.
+## Enable agentless scanning in GCP
+
+1. From Defender for Cloud's menu, select **Environment settings**.
+1. Select the relevant project or organization.
+1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**.
+
+ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-plan.png" alt-text="Screenshot that shows where to select the plan for GCP projects." lightbox="media/enable-agentless-scanning-vms/gcp-select-plan.png":::
+
+1. In the settings pane, turn on **Agentless scanning**.
+
+ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-agentless.png" alt-text="Screenshot that shows where to select agentless scanning." lightbox="media/enable-agentless-scanning-vms/gcp-select-agentless.png":::
+
+1. Select **Save and Next: Configure Access**.
+1. Copy the onboarding script.
+1. Run the onboarding script in the GCP organization/project scope (GCP portal or gcloud CLI).
+1. Select **Next: Review and generate**.
+1. Select **Update**.
+
## Exclude machines from scanning

Agentless scanning applies to all of the eligible machines in the subscription. To prevent specific machines from being scanned, you can exclude machines from agentless scanning based on your pre-existing environment tags. When Defender for Cloud performs the continuous discovery for machines, excluded machines are skipped.
defender-for-cloud How To Manage Attack Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md
Title: Identify and remediate attack paths
description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. Previously updated : 07/10/2023 Last updated : 08/10/2023 # Identify and remediate attack paths
You can check out the full list of [Attack path names and descriptions](attack-p
| Aspect | Details | |--|--|
-| Release state | GA (General Availability) |
+| Release state | GA (General Availability) for Azure and AWS <br> Preview for GCP |
| Prerequisites | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md), or [Enable Defender for Servers P1 (which includes MDVM)](defender-for-servers-introduction.md) or [Defender for Servers P2 (which includes MDVM and Qualys)](defender-for-servers-introduction.md). <br> - [Enable Defender CSPM](enable-enhanced-security.md) <br> - Enable the agentless container posture extension in Defender CSPM, or [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) container data plane workloads in security explorer. |
| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled |
| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS, GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
## Features of the attack path overview page
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
Title: Build queries with cloud security explorer
description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Previously updated : 08/10/2023 Last updated : 08/16/2023 # Build queries with cloud security explorer
Defender for Cloud's contextual security capabilities assists security teams in
Use the cloud security explorer to proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
-With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure and AWS).
+With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure, AWS, and GCP).
Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
Learn more about [the cloud security graph, attack path analysis, and the cloud
| Release state | GA (General Availability) |
| Required plans | - Defender Cloud Security Posture Management (CSPM) enabled<br>- Defender for Servers P2 customers can use the explorer UI to query for keys and secrets, but must have Defender CSPM enabled to get the full value of the Explorer. |
| Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds - GCP (Preview) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
## Prerequisites
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 08/07/2023 Last updated : 08/15/2023 # What's new in Microsoft Defender for Cloud?
Updates in August include:
|Date |Update |
|-|-|
+| August 15 | [Preview release of GCP support in Defender CSPM](#preview-release-of-gcp-support-in-defender-cspm)|
| August 7 | [New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions](#new-security-alerts-in-defender-for-servers-plan-2-detecting-potential-attacks-abusing-azure-virtual-machine-extensions)
+### Preview release of GCP support in Defender CSPM
+
+August 15, 2023
+
+We are announcing the preview release of the Defender CSPM contextual cloud security graph and attack path analysis with support for GCP resources. You can use Defender CSPM for comprehensive visibility and intelligent cloud security across GCP resources.
+
+ Key features of our GCP support include:
+
+- **Attack path analysis** - Understand the potential routes attackers might take.
+- **Cloud security explorer** - Proactively identify security risks by running graph-based queries on the security graph.
+- **Agentless scanning** - Scan servers and identify secrets and vulnerabilities without installing an agent.
+- **Data-aware security posture** - Discover and remediate risks to sensitive data in Google Cloud Storage buckets.
+
+Learn more about [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options).
+
### New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions

August 7, 2023
defender-for-cloud Subassessment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/subassessment-rest-api.md
++
+ Title: Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
+description: Learn about container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
++ Last updated : 08/16/2023+++
+# Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments
+
+API Version: 2019-01-01-preview
+
+Get security subassessments on all your scanned resources inside a scope.
+
+## Overview
+
+You can access vulnerability assessment results programmatically for both registry and runtime recommendations using the subassessments REST API.
+
+For more information on how to get started with our REST API, see [Azure REST API reference](/rest/api/azure/). The following sections provide details specific to the container vulnerability assessment results powered by Microsoft Defender Vulnerability Management.
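+
+As a quick illustration, the List operation described below can be called with any HTTP client. This is a minimal sketch, assuming the Python `requests` library; the subscription, resource group, registry, and assessment key values are placeholders to replace with your own.
+
+```python
+# Minimal sketch: list subassessments for a container registry scope.
+# All <angle-bracket> values are placeholders, not real identifiers.
+import requests
+
+token = "<bearer-token>"  # Azure AD token for https://management.azure.com (see azure_auth below)
+scope = (
+    "/subscriptions/<subscription-id>"
+    "/resourceGroups/<resource-group>"
+    "/providers/Microsoft.ContainerRegistry/registries/<registry-name>"
+)
+url = (
+    "https://management.azure.com" + scope +
+    "/providers/Microsoft.Security/assessments/<assessment-key>/subAssessments"
+)
+resp = requests.get(
+    url,
+    headers={"Authorization": f"Bearer {token}"},
+    params={"api-version": "2019-01-01-preview"},
+)
+resp.raise_for_status()
+for sub in resp.json().get("value", []):
+    print(sub["name"], sub["properties"]["displayName"])
+```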
+
+## HTTP Requests
+
+### Get
+
+#### GET
+
+`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments/{subAssessmentName}?api-version=2019-01-01-preview`
+
+#### URI parameters
+
+| Name | In | Required | Type | Description |
+| -- | -- | -- | -- | -- |
+| assessmentName | path | True | string | The Assessment Key - Unique key for the assessment type |
+| scope | path | True | string | Scope of the query. Can be subscription (/subscriptions/0b06d9ea-afe6-4779-bd59-30e5c2d9d13f) or management group (/providers/Microsoft.Management/managementGroups/mgName). |
+| subAssessmentName | path | True | string | The Sub-Assessment Key - Unique key for the subassessment type |
+| api-version | query | True | string | API version for the operation |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | -- |
+| 200 OK | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/get#securitysubassessment) | OK |
+| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/get#clouderror) | Error response describing why the operation failed. |
+
+### List
+
+#### GET
+
+`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments?api-version=2019-01-01-preview`
+
+#### URI parameters
+
+| **Name** | **In** | **Required** | **Type** | **Description** |
+| -- | -- | -- | -- | -- |
+| **assessmentName** | path | True | string | The Assessment Key - Unique key for the assessment type |
+| **scope** | path | True | string | Scope of the query. The scope for AzureContainerVulnerability is the registry itself. |
+| **api-version** | query | True | string | API version for the operation |
+
+#### Responses
+
+| Name | Type | Description |
+| -- | -- | -- |
+| 200 OK | [SecuritySubAssessmentList](/rest/api/defenderforcloud/sub-assessments/list#securitysubassessmentlist) | OK |
+| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/list#clouderror) | Error response describing why the operation failed. |
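+
+When results span multiple pages, the List response (a `SecuritySubAssessmentList`, described later in this article) carries a `nextLink` with the URI of the next page. The following is a minimal paging sketch, assuming `requests` and a `token` placeholder as in the earlier example:
+
+```python
+# Sketch: iterate all subassessments by following nextLink until it's absent.
+import requests
+
+def iter_subassessments(first_url: str, token: str):
+    url = first_url  # the List URL above, including ?api-version=2019-01-01-preview
+    headers = {"Authorization": f"Bearer {token}"}
+    while url:
+        resp = requests.get(url, headers=headers)
+        resp.raise_for_status()
+        page = resp.json()
+        yield from page.get("value", [])
+        url = page.get("nextLink")  # absent or null on the last page
+```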
+
+## Security
+
+### azure_auth
+
+Azure Active Directory OAuth2 Flow
+
+Type: oauth2
+Flow: implicit
+Authorization URL: `https://login.microsoftonline.com/common/oauth2/authorize`
+
+Scopes
+
+| Name | Description |
+| -- | -- |
+| user_impersonation | impersonate your user account |
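+
+For example, a bearer token for these requests can be acquired with the `azure-identity` Python library; this sketch uses `DefaultAzureCredential`, which works with any of its supported credential types (environment variables, managed identity, Azure CLI login, and so on).
+
+```python
+# Sketch: acquire an Azure AD bearer token for Azure Resource Manager calls.
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+token = credential.get_token("https://management.azure.com/.default").token
+# Pass the token in the Authorization header: {"Authorization": f"Bearer {token}"}
+```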
+
+### Example
+
+### HTTP
+
+#### GET
+
+`https://management.azure.com/subscriptions/6ebb89c4-0e91-4f62-888f-c9518e662293/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry/providers/Microsoft.Security/assessments/cf02effd-8e33-4b84-a012-1e61cf1a5638/subAssessments?api-version=2019-01-01-preview`
+
+#### Sample Response
+
+```json
+{
+ "value": [
+ {
+ "type": "Microsoft.Security/assessments/subAssessments",
+ "id": "/subscriptions/3905431d-c062-4c17-8fd9-c51f89f334c4/resourceGroups/PytorchEnterprise/providers/Microsoft.ContainerRegistry/registries/ptebic/providers/Microsoft.Security/assessments/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5/subassessments/3f069764-2777-3731-9698-c87f23569a1d",
+ "name": "3f069764-2777-3731-9698-c87f23569a1d",
+ "properties": {
+ "id": "CVE-2021-39537",
+ "displayName": "CVE-2021-39537",
+ "status": {
+ "code": "NotApplicable",
+ "severity": "High",
+ "cause": "Exempt",
+ "description": "Disabled parent assessment"
+ },
+ "remediation": "Create new image with updated package libncursesw5 with version 6.2-0ubuntu2.1 or higher.",
+ "description": "This vulnerability affects the following vendors: Gnu, Apple, Red_Hat, Ubuntu, Debian, Suse, Amazon, Microsoft, Alpine. To view more details about this vulnerability please visit the vendor website.",
+ "timeGenerated": "2023-08-08T08:14:13.742742Z",
+ "resourceDetails": {
+ "source": "Azure",
+ "id": "/repositories/public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121/images/sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6"
+ },
+ "additionalData": {
+ "assessedResourceType": "AzureContainerRegistryVulnerability",
+ "artifactDetails": {
+ "repositoryName": "public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121",
+ "registryHost": "ptebic.azurecr.io",
+ "digest": "sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6",
+ "tags": [
+ "biweekly.202305.2"
+ ],
+ "artifactType": "ContainerImage",
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "lastPushedToRegistryUTC": "2023-05-15T16:00:40.2938142Z"
+ },
+ "softwareDetails": {
+ "osDetails": {
+ "osPlatform": "linux",
+ "osVersion": "ubuntu_linux_20.04"
+ },
+ "packageName": "libncursesw5",
+ "category": "OS",
+ "fixReference": {
+ "id": "USN-6099-1",
+ "url": "https://ubuntu.com/security/notices/USN-6099-1",
+ "description": "USN-6099-1: ncurses vulnerabilities 2023 May 23",
+ "releaseDate": "2023-05-23T00:00:00+00:00"
+ },
+ "vendor": "ubuntu",
+ "version": "6.2-0ubuntu2",
+ "evidence": [
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s",
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s"
+ ],
+ "language": "",
+ "fixedVersion": "6.2-0ubuntu2.1",
+ "fixStatus": "FixAvailable"
+ },
+ "vulnerabilityDetails": {
+ "cveId": "CVE-2021-39537",
+ "references": [
+ {
+ "title": "CVE-2021-39537",
+ "link": "https://nvd.nist.gov/vuln/detail/CVE-2021-39537"
+ }
+ ],
+ "cvss": {
+ "2.0": null,
+ "3.0": {
+ "base": 7.8,
+ "cvssVectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H/E:P/RL:U/RC:R"
+ }
+ },
+ "workarounds": [],
+ "publishedDate": "2020-08-04T00:00:00",
+ "lastModifiedDate": "2023-07-07T00:00:00",
+ "severity": "High",
+ "cpe": {
+ "uri": "cpe:2.3:a:ubuntu:libncursesw5:*:*:*:*:*:ubuntu_linux_20.04:*:*",
+ "part": "Applications",
+ "vendor": "ubuntu",
+ "product": "libncursesw5",
+ "version": "*",
+ "update": "*",
+ "edition": "*",
+ "language": "*",
+ "softwareEdition": "*",
+ "targetSoftware": "ubuntu_linux_20.04",
+ "targetHardware": "*",
+ "other": "*"
+ },
+ "weaknesses": {
+ "cwe": [
+ {
+ "id": "CWE-787"
+ }
+ ]
+ },
+ "exploitabilityAssessment": {
+ "exploitStepsVerified": false,
+ "exploitStepsPublished": false,
+ "isInExploitKit": false,
+ "types": [],
+ "exploitUris": []
+ }
+ },
+ "cvssV30Score": 7.8
+ }
+ }
+ }
+ ]
+}
+```
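+
+To show the shape of the payload, this sketch pulls the key vulnerability fields out of a response like the one above; `payload` is a hypothetical variable holding the parsed JSON body (for example, `resp.json()` from the earlier sketches).
+
+```python
+# Sketch: extract CVE, severity, and fix details from a subassessments response.
+for sub in payload["value"]:  # `payload` is the parsed JSON response body
+    props = sub["properties"]
+    extra = props["additionalData"]
+    vuln = extra["vulnerabilityDetails"]
+    software = extra["softwareDetails"]
+    print(
+        vuln["cveId"],
+        vuln["severity"],
+        software["packageName"],
+        software["fixStatus"],
+        software.get("fixedVersion", ""),
+    )
+```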
+
+## Definitions
+
+| Name | Description |
+| -- | -- |
+| AzureResourceDetails | Details of the Azure resource that was assessed |
+| CloudError | Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This definition also follows the OData error response format.) |
+| CloudErrorBody | The error detail |
+| AzureContainerVulnerability | More context fields for container registry Vulnerability assessment |
+| CVE | CVE Details |
+| CVSS | CVSS Details |
+| ErrorAdditionalInfo | The resource management error additional info. |
+| SecuritySubAssessment | Security subassessment on a resource |
+| SecuritySubAssessmentList | List of security subassessments |
+| ArtifactDetails | Details for the affected container image |
+| SoftwareDetails | Details for the affected software package |
+| FixReference | Details on the fix, if available |
+| OS Details | Details on the OS information |
+| VulnerabilityDetails | Details on the detected vulnerability |
+| CPE | Common Platform Enumeration |
+| Cwe | Common weakness enumeration |
+| VulnerabilityReference | Reference links to vulnerability |
+| ExploitabilityAssessment | Reference links to an example exploit |
+
+### AzureContainerRegistryVulnerability (MDVM)
+
+Additional context fields for Azure container registry vulnerability assessment
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| assessedResourceType | string: AzureContainerRegistryVulnerability | Subassessment resource type |
+| cvssV30Score | Numeric | CVSS V3 Score |
+| vulnerabilityDetails | VulnerabilityDetails | |
+| artifactDetails | ArtifactDetails | |
+| softwareDetails | SoftwareDetails | |
+
+### ArtifactDetails
+
+Context details for the affected container image
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| repositoryName | String | Repository name |
+| registryHost | String | Registry host |
+| lastPushedToRegistryUTC | Timestamp | UTC timestamp of the last push to the registry |
+| artifactType | String: ContainerImage | |
+| mediaType | String | Layer media type |
+| Digest | String | Digest of vulnerable image |
+| Tags | String[] | Tags of vulnerable image |
+
+### Software Details
+
+Details for the affected software package
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| fixedVersion | String | Fixed Version |
+| category | String | Vulnerability category – OS or Language |
+| osDetails | OsDetails | |
+| language | String | Language of the affected package (for example, Python or .NET); can be empty |
+| version | String | |
+| vendor | String | |
+| packageName | String | |
+| fixStatus | String | Unknown, FixAvailable, NoFixAvailable, Scheduled, WontFix |
+| evidence | String[] | Evidence for the package |
+| fixReference | FixReference | |
+
+### FixReference
+
+Details on the fix, if available
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| ID | String | Fix ID |
+| Description | String | Fix Description |
+| releaseDate | Timestamp | Fix timestamp |
+| url | String | URL to fix notification |
+
+### OS Details
+
+Details on the OS information
+
+| **Name** | **Type** | **Description** |
+| - | -- | -- |
+| osPlatform | String | For example: Linux, Windows |
+| osName | String | For example: Ubuntu |
+| osVersion | String | |
+
+### VulnerabilityDetails
+
+Details on the detected vulnerability
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| severity | Severity | The subassessment severity level |
+| LastModifiedDate | Timestamp | |
+| publishedDate | Timestamp | Published date |
+| ExploitabilityAssessment | ExploitabilityAssessment | |
+| CVSS | Dictionary <string, CVSS> | Dictionary from cvss version to cvss details object |
+| Workarounds | Workaround[] | Published workarounds for vulnerability |
+| References | VulnerabilityReference | |
+| Weaknesses | Weakness[] | |
+| cveId | String | CVE ID |
+| Cpe | CPE | |
+
+### CPE (Common Platform Enumeration)
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| language | String | Language tag |
+| softwareEdition | String | |
+| Version | String | Package version |
+| targetSoftware | String | Target Software |
+| vendor | String | Vendor |
+| product | String | Product |
+| edition | String | |
+| update | String | |
+| other | String | |
+| part | String | Applications, Hardware, OperatingSystems |
+| uri | String | CPE 2.3 formatted uri |
+
+### Weakness
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| Cwe | Cwe[] | |
+
+### Cwe (Common weakness enumeration)
+
+CWE details
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| ID | String | CWE ID |
+
+### VulnerabilityReference
+
+Reference links to vulnerability
+
+| **Name** | **Type** | **Description** |
+| -- | -- | - |
+| link | String | Reference url |
+| title | String | Reference title |
+
+### ExploitabilityAssessment
+
+Reference links to an example exploit
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| exploitUris | String[] | |
+| exploitStepsPublished | Boolean | Whether the exploit steps have been published |
+| exploitStepsVerified | Boolean | Whether the exploit steps have been verified |
+| isInExploitKit | Boolean | Is part of the exploit kit |
+| types | String[] | Exploit types, for example: NotAvailable, Dos, Local, Remote, WebApps, PrivilegeEscalation |
+
+### AzureResourceDetails
+
+Details of the Azure resource that was assessed
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| ID | string | Azure resource ID of the assessed resource |
+| source | string: Azure | The platform where the assessed resource resides |
+
+### CloudError
+
+Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This response also follows the OData error response format.)
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| error.additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. |
+| error.code | string | The error code. |
+| error.details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#clouderrorbody)[] | The error details. |
+| error.message | string | The error message. |
+| error.target | string | The error target. |
+
+### CloudErrorBody
+
+The error detail.
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. |
+| code | string | The error code. |
+| details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list#clouderrorbody)[] | The error details. |
+| message | string | The error message. |
+| target | string | The error target. |
+
+### ErrorAdditionalInfo
+
+The resource management error additional info.
+
+| **Name** | **Type** | **Description** |
+| -- | -- | - |
+| info | object | The additional info. |
+| type | string | The additional info type. |
+
+### SecuritySubAssessment
+
+Security subassessment on a resource
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| ID | string | Resource ID |
+| name | string | Resource name |
+| properties.additionalData | AdditionalData: AzureContainerRegistryVulnerability | Details of the subassessment |
+| properties.category | string | Category of the subassessment |
+| properties.description | string | Human readable description of the assessment status |
+| properties.displayName | string | User friendly display name of the subassessment |
+| properties.id | string | Vulnerability ID |
+| properties.impact | string | Description of the impact of this subassessment |
+| properties.remediation | string | Information on how to remediate this subassessment |
+| properties.resourceDetails | ResourceDetails: [AzureResourceDetails](/rest/api/defenderforcloud/sub-assessments/list#azureresourcedetails) | Details of the resource that was assessed |
+| properties.status | [SubAssessmentStatus](/rest/api/defenderforcloud/sub-assessments/list#subassessmentstatus) | Status of the subassessment |
+| properties.timeGenerated | string | The date and time the subassessment was generated |
+| type | string | Resource type |
+
+### SecuritySubAssessmentList
+
+List of security subassessments
+
+| **Name** | **Type** | **Description** |
+| -- | -- | -- |
+| nextLink | string | The URI to fetch the next page. |
+| value | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#securitysubassessment)[] | Security subassessment on a resource |
defender-for-cloud Tutorial Enable Databases Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-databases-plan.md
Database protection includes:
- [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md)
- [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)
-Defender for Databases protects four database protection plans at their own cost. You can learn more about Defender for Clouds pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+Defender for Databases includes four database protection plans, each at its own cost. You can learn more about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Prerequisites
Defender for Databases protects four database protection plans at their own cost
When you enable database protection, you enable all four of the Defender plans and protect all of the supported databases on your subscription.
-**To enable Defender for App Service on your subscription**:
+**To enable Defender for Databases on your subscription**:
1. Sign in to the [Azure portal](https://portal.azure.com).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Customers that rely on the `resourceID` to query DevOps recommendation data will
Queries will need to be updated to include both the old and new `resourceID` to show both, for example, total over time.
-Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations.
+Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations. The template DevOps workbook is planned to be updated to reflect the new recommendations, although during the actual migration, customers may experience some errors with the workbook.
+
+The experience on the recommendations page will be affected, and customers will need to query under "All recommendations" to view the new DevOps recommendations. For Azure DevOps, deprecated assessments may continue to show for a maximum of 14 days if new pipelines are not run. Refer to [Defender for DevOps common questions](https://learn.microsoft.com/azure/defender-for-cloud/faq-defender-for-devops#why-don-t-i-see-recommendations-for-findings-) for details.
-The recommendations page's experience will have minimal impact and deprecated assessments may continue to show for a maximum of 14 days if new scan results aren't submitted.
### DevOps Resource Deduplication for Defender for DevOps
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Title: OT monitoring software versions - Microsoft Defender for IoT description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features. Previously updated : 07/03/2023 Last updated : 08/09/2023 # OT monitoring software versions
This version includes the following updates and enhancements:
- [UI enhancements for downloading PCAP files from the sensor](how-to-view-alerts.md#access-alert-pcap-data)
- [*cyberx* and *cyberx_host* users aren't enabled by default](roles-on-premises.md#default-privileged-on-premises-users)
+> [!NOTE]
+> Due to internal improvements to the OT sensor's device inventory, column edits made to your device inventory aren't retained after updating to version 23.1.2. If you'd previously edited the columns shown in your device inventory, you'll need to make those same edits again after updating your sensor.
+>
+
## Versions 22.3.x

### 22.3.10
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
Last updated 02/20/2020 -
- - seo-lt-2019
- - ignite-2022
+ # Troubleshoot common Azure Database Migration Service issues and errors
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
Title: Deliver events using private link service description: This article describes how to work around the limitation of not able to deliver events using private link service. Previously updated : 03/01/2023 Last updated : 08/16/2023 # Deliver events using private link service
To deliver events to Storage queues using managed identity, follow these steps:
1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/blobs/assign-azure-role-data-access.md) role on Azure Storage queue.
1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned or user-assigned managed identity.
-> [!NOTE]
-> - If there's no firewall or virtual network rules configured for the Azure Storage account, you can use both user-assigned and system-assigned identities to deliver events to the Azure Storage account.
-> - If a firewall or virtual network rule is configured for the Azure Storage account, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the storage account. You can't use user-assigned managed identity whether this option is enabled or not.
+## Firewall and virtual network rules
+If no firewall or virtual network rules are configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use both user-assigned and system-assigned identities to deliver events.
+
+If a firewall or virtual network rule is configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the destination. You can't use a user-assigned managed identity whether or not this option is enabled.
## Next steps

For more information about delivering events using a managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
event-grid Powershell Webhook Secure Delivery Azure Ad App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-app.md
Title: Azure PowerShell - Secure WebHook delivery with Azure AD Application in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD Application using Azure Event Grid ms.devlang: powershell-+ Last updated 10/14/2021
event-grid Powershell Webhook Secure Delivery Azure Ad User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-user.md
Title: Azure PowerShell - Secure WebHook delivery with Azure AD User in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD User using Azure Event Grid ms.devlang: powershell-+ Last updated 09/29/2021
event-grid Secure Webhook Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md
Title: Secure WebHook delivery with Azure AD in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure Active Directory using Azure Event Grid + Last updated 10/12/2022
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
ExpressRoute Local is a SKU of ExpressRoute circuit, in addition to the Standard
ExpressRoute Local may not be available for an ExpressRoute Location. For peering location and supported Azure local region, see [locations and connectivity providers](expressroute-locations-providers.md#partners).
- > [!NOTE]
- > The restriction of Azure regions in the same metro doesn't apply for ExpressRoute Local in Virtual WAN.
- ### What are the benefits of ExpressRoute Local? While you need to pay egress data transfer for your Standard or Premium ExpressRoute circuit, you don't pay egress data transfer separately for your ExpressRoute Local circuit. In other words, the price of ExpressRoute Local includes data transfer fees. ExpressRoute Local is an economical solution if you have massive amounts of data to transfer and want it to travel over a private connection to an ExpressRoute peering location near your desired Azure regions.
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
The following fields are included in the table in the **Values** section on the
Many organizations opt to obfuscate their registry information. Sometimes contact email addresses end in *@anonymised.email*. This placeholder is used instead of a real contact address. Many fields are optional during registration configuration, so any field with an empty value wasn't included by the registrant. ++
+### Change history
+
+The "Change history" tab displays a list of modifications that have been applied to an asset over time. This information helps you track these changes over time and better understand the lifecycle of the asset. This tab displays a variety of changes, including but not limited to asset states, labels and external IDs. For each change, we list the user who implemented the change and a timestamp.
+
+[ ![Screenshot that shows the Change history tab.](media/change-history-1.png) ](media/change-history-1.png#lightbox)
+++ ## Next steps - [Understand dashboards](understanding-dashboards.md)
external-attack-surface-management Understanding Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md
All assets are labeled as one of the following states:
These asset states are uniquely processed and monitored to ensure that customers have clear visibility into the most critical assets by default. For instance, ΓÇ£Approved InventoryΓÇ¥ assets are always represented in dashboard charts and are scanned daily to ensure data recency. All other kinds of assets are not included in dashboard charts by default; however, users can adjust their inventory filters to view assets in different states as needed. Similarly, "CandidateΓÇ¥ assets are only scanned during the discovery process; itΓÇÖs important to review these assets and change their state to ΓÇ£Approved InventoryΓÇ¥ if they are owned by your organization. +
+## Tracking inventory changes
+
+Your attack surface is constantly changing, which is why Defender EASM continuously analyzes and updates your inventory to ensure accuracy. Assets are frequently added and removed from inventory, so it's important to track these changes to understand your attack surface and identify key trends. The inventory changes dashboard provides an overview of these changes, displaying the "added" and "removed" counts for each asset type. You can filter the dashboard by one of two date ranges: the last 7 days or the last 30 days. For a more granular view of these inventory changes, refer to the "Changes by date" section.
++
+[ ![Screenshot of Inventory Changes screen.](media/inventory-changes-1.png)](media/inventory-changes-1.png#lightbox)
++++ ## Next steps -- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Modifying inventory assets](labeling-inventory-assets.md)
- [Understanding asset details](understanding-asset-details.md) - [Using and managing discovery](using-and-managing-discovery.md)
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
Title: Details of the policy definition structure description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Previously updated : 08/29/2022 Last updated : 08/15/2023 + # Azure Policy definition structure Azure Policy establishes conventions for resources. Policy definitions describe resource compliance
always stay the same, however their values change based on the individual fillin
Parameters work the same way when building policies. By including parameters in a policy definition, you can reuse that policy for different scenarios by using different values.
-> [!NOTE]
-> Parameters may be added to an existing and assigned definition. The new parameter must include the
-> **defaultValue** property. This prevents existing assignments of the policy or initiative from
-> indirectly being made invalid.
+Parameters may be added to an existing and assigned definition. The new parameter must include the
+**defaultValue** property. This prevents existing assignments of the policy or initiative from
+indirectly being made invalid.
-> [!NOTE]
-> Parameters can't be removed from a policy definition that's been assigned.
+Parameters can't be removed from a policy definition because there may be an assignment that sets the parameter value, and that reference would become broken. Instead of removing a parameter, you can classify it as deprecated in the parameter metadata.
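For illustration, a minimal parameter sketch; the parameter name is hypothetical, and the `deprecated` metadata flag is shown as one way to mark the parameter:

```json
"parameters": {
  "legacyEffect": {
    "type": "String",
    "defaultValue": "Audit",
    "allowedValues": [ "Audit", "Disabled" ],
    "metadata": {
      "displayName": "Legacy effect",
      "description": "No longer used. Kept with a defaultValue so existing assignments stay valid.",
      "deprecated": true
    }
  }
}
```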
### Parameter properties
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
These effects are currently supported in a policy definition:
## Interchanging effects
-Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties.
+Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies assess a resource's compliance based on a child or extension resource's properties.
The following list provides some general guidance about interchangeable effects: - **Audit**, **Deny**, and either **Modify** or **Append** are often interchangeable.
related resources to match.
- When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource. However, an [audit](#audit) effect should be considered instead.+
+> [!NOTE]
+>
+> **Type** and **Name** segments can be combined to generically retrieve nested resources.
+>
+> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`.
+>
+> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`.
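> As a hedged illustration, a **details** block that uses this wildcard pattern might look like the following sketch; the provider, types, and property names mirror the placeholders above and aren't a real API:
>
> ```json
> "details": {
>   "type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType",
>   "name": "[concat(field('name'), '/?')]",
>   "existenceCondition": {
>     "field": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType/exampleProperty",
>     "equals": "expectedValue"
>   }
> }
> ```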
+ - **ResourceGroupName** (optional) - Allows the matching of the related resource to come from a different resource group. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.
assignment.
#### Subscription deletion
-Policy won't block removal of resources that happens during a subscription deletion.
+Policy doesn't block removal of resources that happens during a subscription deletion.
#### Resource group deletion
-Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`.
+Policy evaluates resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule block a resource group deletion. Policy doesn't block removal of resources that don't support location and tags, nor does any policy with `mode:all`.
#### Cascade deletion
-Cascade deletion occurs when deleting of a parent resource is implicitly deletes all its child resources. Policy won't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child).
+Cascade deletion occurs when deleting a parent resource implicitly deletes all its child resources. Policy doesn't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) fails, but a delete to the storage account (parent) implicitly deletes the diagnostic setting (child).
[!INCLUDE [policy-denyAction](../../../../includes/azure-policy-deny-action.md)]
related resources to match and the template deployment to execute.
resource instead of all resources of the specified type. - When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource.+
+> [!NOTE]
+>
+> **Type** and **Name** segments can be combined to generically retrieve nested resources.
+>
+> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`.
+>
+> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`.
+ - **ResourceGroupName** (optional) - Allows the matching of the related resource to come from a different resource group. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource.
logs, and the policy effect don't occur. For more information, see
## Manual
-The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance need attesting.
+The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance needs attesting.
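As a minimal sketch of a policy rule using the manual effect; the `defaultState` value shown here is an assumption about the initial compliance state you want:

```json
"then": {
  "effect": "manual",
  "details": {
    "defaultState": "Unknown"
  }
}
```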
> [!NOTE] > Support for manual policy is available through various Microsoft Defender
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 06/15/2022 Last updated : 08/15/2023
provide the following abilities:
- Query resources with complex filtering, grouping, and sorting by resource properties. - Explore resources iteratively based on governance requirements. - Assess the impact of applying policies in a vast cloud environment.-- [Query changes made to resource properties](./how-to/get-resource-changes.md)
- (preview).
+- [Query changes made to resource properties](./how-to/get-resource-changes.md).
-In this documentation, you'll go over each feature in detail.
+This documentation reviews each feature in detail. > [!NOTE] > Azure Resource Graph powers Azure portal's search bar, the new browse **All resources** experience,
> [!NOTE] > Azure Resource Graph powers Azure portal's search bar, the new browse **All resources** experience,
With Azure Resource Graph, you can:
- Access the properties returned by resource providers without needing to make individual calls to each resource provider.-- View the last seven days of resource configuration changes to see what properties changed and
- when. (preview)
+- View the last 14 days of resource configuration changes to see which properties changed and
+ when.
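For example, a simple Resource Graph query in Kusto Query Language that counts resources by type across the subscriptions you can read:

```kusto
Resources
| summarize resourceCount = count() by type
| order by resourceCount desc
```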
> [!NOTE] > As a _preview_ feature, some `type` objects have additional non-Resource Manager properties
First, for details on operations and functions that can be used with Azure Resou
## Permissions in Azure Resource Graph
-To use Resource Graph, you must have appropriate rights in [Azure role-based access
-control (Azure RBAC)](../../role-based-access-control/overview.md) with at least read access to the
-resources you want to query. Without at least `read` permissions to the Azure object or object
-group, results won't be returned.
+To use Resource Graph, you must have appropriate rights in [Azure role-based access control (Azure
+RBAC)](../../role-based-access-control/overview.md) with at least `read` access to the resources you
+want to query. No results are returned if you don't have at least `read` permissions to the Azure
+object or object group.
> [!NOTE] > Resource Graph uses the subscriptions available to a principal during login. To see resources of a > new subscription added during an active session, the principal must refresh the context. This > action happens automatically when logging out and back in.
-Azure CLI and Azure PowerShell use subscriptions that the user has access to. When using REST API
-directly, the subscription list is provided by the user. If the user has access to any of the
+Azure CLI and Azure PowerShell use subscriptions that the user has access to. When you use a REST
+API, the subscription list is provided by the user. If the user has access to any of the
subscriptions in the list, the query results are returned for the subscriptions the user has access
-to. This behavior is the same as when calling
-[Resource Groups - List](/rest/api/resources/resourcegroups/list) \- you get resource groups you've
-access to without any indication that the result may be partial. If there are no subscriptions in
-the subscription list that the user has appropriate rights to, the response is a _403_ (Forbidden).
+to. This behavior is the same as when calling [Resource Groups - List](/rest/api/resources/resourcegroups/list)
+because you get resource groups that you can access, without any indication that the result may be
+partial. If there are no subscriptions in the subscription list that the user has appropriate rights
+to, the response is a _403_ (Forbidden).
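As an illustrative sketch, the Azure CLI resource-graph extension lets you scope a query to an explicit subscription list; the subscription IDs are placeholders:

```azurecli
# Run a Resource Graph query against specific subscriptions.
# Results cover only the subscriptions you have read access to.
az graph query \
  --graph-query "Resources | project name, type, location" \
  --subscriptions <subscription-id-1> <subscription-id-2>
```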
> [!NOTE] > In the **preview** REST API version `2020-04-01-preview`, the subscription list may be omitted.
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
Title: Azure HDInsight architecture with Enterprise Security Package
description: Learn how to plan Azure HDInsight security with Enterprise Security Package. -+ Last updated 05/11/2023
healthcare-apis Find Identity Object Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md
+ Last updated 06/03/2022
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
Below tutorials describe steps to enable SMART on FHIR applications with FHIR Se
- After registering the application, make note of the applicationId for client application. - Ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
-## SMART on FHIR using AHDS Samples OSS
+## SMART on FHIR using AHDS Samples OSS (SMART on FHIR (Enhanced))
### Step 1: Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to the "FHIR SMART User" role is able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
Follow the steps listed under section [Manage Users: Assign Users to Role](https
<summary> Click to expand! </summary> > [!NOTE]
-> This is another option to using "SMART on FHIR using AHDS Samples OSS" mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence.
+> This is an alternative to SMART on FHIR (Enhanced) mentioned above. The SMART on FHIR Proxy option only enables the EHR launch sequence.
### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
healthcare-apis Dicom Cast Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md
- Title: DICOMcast overview - Azure Health Data Services
-description: In this article, you'll learn the concepts of DICOMcast.
---- Previously updated : 06/03/2022---
-# DICOMcast overview
-
-> [!NOTE]
-> On **July 31, 2023** DICOMcast will be retired. DICOMcast will continue to be available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [migration guidance](https://aka.ms/dicomcast-migration).
-
-DICOMcast offers customers the ability to synchronize the data from a DICOM service to a [FHIR service](../../healthcare-apis/fhir/overview.md), which allows healthcare organizations to integrate clinical and imaging data. DICOMcast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning.
-
-## Architecture
-
-[ ![Architecture diagram of DICOMcast](media/dicom-cast-architecture.png) ](media/dicom-cast-architecture.png#lightbox)
--
-1. **Poll for batch of changes**: DICOMcast polls for any changes via the [Change Feed](dicom-change-feed-overview.md), which captures any changes that occur in your Medical Imaging Server for DICOM.
-1. **Fetch corresponding FHIR resources, if any**: If any DICOM service changes and correspond to FHIR resources, DICOMcast will fetch the related FHIR resources. DICOMcast synchronizes DICOM tags to the FHIR resource types *Patient* and *ImagingStudy*.
-1. **Merge FHIR resources and 'PUT' as a bundle in a transaction**: The FHIR resources corresponding to the DICOMcast captured changes will be merged. The FHIR resources will be 'PUT' as a bundle in a transaction into your FHIR service.
-1. **Persist state and process next batch**: DICOMcast will then persist the current state to prepare for next batch of changes.
-
-The current implementation of DICOMcast:
--- Supports a single-threaded process that reads from the DICOM change feed and writes to a FHIR service.-- Is hosted by Azure Container Instance in our sample template, but can be run elsewhere.-- Synchronizes DICOM tags to *Patient* and *ImagingStudy* FHIR resource types*.-- Is configurated to ignore invalid tags when syncing data from the change feed to FHIR resource types.
- - If `EnforceValidationOfTagValues` is enabled, then the change feed entry won't be written to the FHIR service unless every tag that's mapped is valid. For more information, see the [Mappings](#mappings) section below.
- - If `EnforceValidationOfTagValues` is disabled (default), and if a value is invalid, but it's not required to be mapped, then that particular tag won't be mapped. The rest of the change feed entry will be mapped to FHIR resources. If a required tag is invalid, then the change feed entry won't be written to the FHIR service. For more information about the required tags, see [Patient](#patient) and [Imaging Study](#imagingstudy)
-- Logs errors to Azure Table Storage.
- - Errors occur when processing change feed entries that are persisted in Azure Table storage that are in different tables.
- - `InvalidDicomTagExceptionTable`: Stores information about tags with invalid values. Entries here don't necessarily mean that the entire change feed entry wasn't stored in FHIR service, but that the particular value had a validation issue.
- - `DicomFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the change feed entry (such as invalid required tag). All entries in this table weren't stored to FHIR service.
- - `FhirFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the FHIR service (such as conflicting resource already exists). All entries in this table weren't stored to FHIR service.
- - `TransientRetryExceptionTable`: Stores information about change feed entries that faced a transient error (such as FHIR service too busy) and are being retried. Entries in this table note how many times they've been retried, but it doesn't necessarily mean that they eventually failed or succeeded to store to FHIR service.
- - `TransientFailureExceptionTable`: Stores information about change feed entries that had a transient error, and went through the retry policy and still failed to store to FHIR service. All entries in this table failed to store to FHIR service.
-
-## Mappings
-
-The current implementation of DICOMcast has the following mappings:
-
-### Patient
-
-| Property | Tag ID | Tag Name | Required Tag?| Note |
-| :- | :-- | :- | :-- | :-- |
-| Patient.identifier.where(system = '') | (0010,0020) | PatientID | Yes | For now, the system will be empty string. We'll add support later for allowing the system to be specified. |
-| Patient.name.where(use = 'usual') | (0010,0010) | PatientName | No | PatientName will be split into components and added as HumanName to the Patient resource. |
-| Patient.gender | (0010,0040) | PatientSex | No | |
-| Patient.birthDate | (0010,0030) | PatientBirthDate | No | PatientBirthDate only contains the date. This implementation assumes that the FHIR and DICOM services have data from the same time zone. |
-
-### Endpoint
-
-| Property | Tag ID | Tag Name | Note |
-| :- | :-- | :- | : |
-| Endpoint.status ||| The value 'active' will be used when creating the endpoint. |
-| Endpoint.connectionType ||| The system 'http://terminology.hl7.org/CodeSystem/endpoint-connection-type' and value 'dicom-wado-rs' will be used when creating the endpoint. |
-| Endpoint.address ||| The root URL to the DICOMWeb service will be used when creating the endpoint. The rule is described in 'http://hl7.org/fhir/imagingstudy.html#endpoint'. |
-
-### ImagingStudy
-
-| Property | Tag ID | Tag Name | Required | Note |
-| :- | :-- | :- | : | : |
-| ImagingStudy.identifier.where(system = 'urn:dicom:uid') | (0020,000D) | StudyInstanceUID | Yes | The value will have prefix of `urn:oid:`. |
-| ImagingStudy.status | | | No | The value 'available' will be used when creating ImagingStudy. |
-| ImagingStudy.modality | (0008,0060) | Modality | No | |
-| ImagingStudy.subject | | | No | It will be linked to the [Patient](#mappings). |
-| ImagingStudy.started | (0008,0020), (0008,0030), (0008,0201) | StudyDate, StudyTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
-| ImagingStudy.endpoint | | | | It will be linked to the [Endpoint](#endpoint). |
-| ImagingStudy.note | (0008,1030) | StudyDescription | No | |
-| ImagingStudy.series.uid | (0020,000E) | SeriesInstanceUID | Yes | |
-| ImagingStudy.series.number | (0020,0011) | SeriesNumber | No | |
-| ImagingStudy.series.modality | (0008,0060) | Modality | Yes | |
-| ImagingStudy.series.description | (0008,103E) | SeriesDescription | No | |
-| ImagingStudy.series.started | (0008,0021), (0008,0031), (0008,0201) | SeriesDate, SeriesTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. |
-| ImagingStudy.series.instance.uid | (0008,0018) | SOPInstanceUID | Yes | |
-| ImagingStudy.series.instance.sopClass | (0008,0016) | SOPClassUID | Yes | |
-| ImagingStudy.series.instance.number | (0020,0013) | InstanceNumber | No| |
-| ImagingStudy.identifier.where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='ACSN')) | (0008,0050) | Accession Number | No | Refer to http://hl7.org/fhir/imagingstudy.html#notes. |
-
-### Timestamp
-
-DICOM has different date time VR types. Some tags (like Study and Series) have the date, time, and UTC offset stored separately. This means that the date might be partial. This code attempts to translate this into a partial date syntax allowed by the FHIR service.
-
-## Summary
-
-In this concept, we reviewed the architecture and mappings of DICOMcast. This feature is available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [deployment instructions](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md).
-
-> [!IMPORTANT]
-> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket.
-
-
-## Next steps
-
-To get started using the DICOM service, see
-
->[!div class="nextstepaction"]
->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
-
->[!div class="nextstepaction"]
->[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis References For Dicom Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md
This article describes our open-source projects on GitHub that provide source co
* [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, non-diagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service. ### Medical imaging network demo environment
-* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-prem radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow.
+* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow.
## Next steps
For more information about using the DICOM service, see
For more information about DICOM cast, see >[!div class="nextstepaction"]
->[DICOM cast overview](dicom-cast-overview.md)
+>[DICOM cast overview](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md)
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Events Consume Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md
Follow these steps to create a Logic App workflow to consume FHIR events:
## Prerequisites
-Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy Events in the Azure portal](events-deploy-portal.md).
+Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy events using the Azure portal](events-deploy-portal.md).
## Creating a Logic App
You now need to fill out the details of your Logic App. Specify information for
:::image type="content" source="media/events-logic-apps/events-logic-tabs.png" alt-text="Screenshot of the five tabs for specifying your Logic App." lightbox="media/events-logic-apps/events-logic-tabs.png"::: -- Tab 1 - Basics-- Tab 2 - Hosting-- Tab 3 - Monitoring-- Tab 4 - Tags-- Tab 5 - Review + Create
+- Tab 1 - **Basics**
+- Tab 2 - **Hosting**
+- Tab 3 - **Monitoring**
+- Tab 4 - **Tags**
+- Tab 5 - **Review + Create**
### Basics - Tab 1
Enabling your plan makes it zone redundant.
### Hosting - Tab 2
-Continue specifying your Logic App by clicking "Next: Hosting".
+Continue specifying your Logic App by selecting **Next: Hosting**.
#### Storage
Choose the type of storage you want to use and the storage account. You can use
### Monitoring - Tab 3
-Continue specifying your Logic App by clicking "Next: Monitoring".
+Continue specifying your Logic App by selecting **Next: Monitoring**.
#### Monitoring with Application Insights
Enable Azure Monitor Application Insights to automatically monitor your applicat
### Tags - Tab 4
-Continue specifying your Logic App by clicking **Next: Tags**.
+Continue specifying your Logic App by selecting **Next: Tags**.
#### Use tags to categorize resources
This example doesn't use tagging.
### Review + create - Tab 5
-Finish specifying your Logic App by clicking **Next: Review + create**.
+Finish specifying your Logic App by selecting **Next: Review + create**.
#### Review your Logic App
If there are no errors, you'll finally see a notification telling you that your
#### Your Logic App dashboard
-Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard:
+Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by selecting **Overview** in the Logic App menu. Here's a Logic App dashboard:
:::image type="content" source="media/events-logic-apps/events-logic-overview.png" alt-text="Screenshot of your Logic Apps overview dashboard." lightbox="media/events-logic-apps/events-logic-overview.png":::
To set up a new workflow, fill in these details:
Specify a new name for your workflow. Indicate whether you want the workflow to be stateful or stateless. Stateful is for business processes and stateless is for processing IoT events.
-When you've specified the details, select "Create" to begin designing your workflow.
+When you've specified the details, select **Create** to begin designing your workflow.
### Designing the workflow In your new workflow, select the name of the enabled workflow.
-You can write code to design a workflow for your application, but for this tutorial, choose the Designer option on the Developer menu.
+You can write code to design a workflow for your application, but for this tutorial, choose the **Designer** option on the **Developer** menu.
-Next, select "Choose an operation" to display the "Add a Trigger" blade on the right. Then search for "Azure Event Grid" and select the "Azure" tab below. The Event Grid isn't a Logic App Built-in.
+Next, select **Choose an operation** to display the **Add a Trigger** blade on the right. Then search for "Azure Event Grid" and select the **Azure** tab below. Event Grid isn't a Logic Apps built-in operation.
:::image type="content" source="media/events-logic-apps/events-logic-grid.png" alt-text="Screenshot of the search results for Azure Event Grid." lightbox="media/events-logic-apps/events-logic-grid.png":::
-When you see the "Azure Event Grid" icon, select on it to display the Triggers and Actions available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
+When you see the "Azure Event Grid" icon, select on it to display the **Triggers and Actions** available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md).
-Select "When a resource event occurs" to set up a trigger for the Azure Event Grid.
+Select **When a resource event occurs** to set up a trigger for the Azure Event Grid.
To tell Event Grid how to respond to the trigger, you must specify parameters and add actions.
Fill in the details for subscription, resource type, and resource name. Then you
- Resource deleted - Resource updated
-For more information about event types, see [What FHIR resource events does Events support?](events-faqs.md#what-fhir-resource-events-does-events-support).
+For more information about supported event types, see [Frequently asked questions about events](events-faqs.md).
### Adding an HTTP action
-Once you've specified the trigger events, you must add more details. Select the "+" below the "When a resource event occurs" button.
+Once you've specified the trigger events, you must add more details. Select the **+** below the **When a resource event occurs** button.
-You need to add a specific action. Select "Choose an operation" to continue. Then, for the operation, search for "HTTP" and select on "Built-in" to select an HTTP operation. The HTTP action will allow you to query the FHIR service.
+You need to add a specific action. Select **Choose an operation** to continue. Then, for the operation, search for "HTTP" and select **Built-in** to select an HTTP operation. The HTTP action allows you to query the FHIR service.
The options in this example are:
The options in this example are:
At this point, you need to give the FHIR Reader access to your app, so it can verify that the event details are correct. Follow these steps to give it access:
-1. The first step is to go back to your Logic App and select the Identity menu item.
+1. The first step is to go back to your Logic App and select the **Identity** menu item.
-2. In the System assigned tab, make sure the Status is "On".
+2. In the **System assigned** tab, make sure **Status** is set to **On**.
-3. Select on Azure role assignments. Select "Add role assignment".
+3. Select **Azure role assignments**, and then select **Add role assignment**.
4. Specify the following options:
At this point, you need to give the FHIR Reader access to your app, so it can ve
- Subscription = your subscription - Role = FHIR Data Reader.
-When you've specified the first four steps, add the role assignment by Managed identity, using Subscription, Managed identity (Logic App Standard), and select your Logic App by clicking the name and then clicking the Select button. Finally, select "Review + assign" to assign the role.
+When you've completed the first four steps, add the role assignment by managed identity: select your subscription, select **Logic App (Standard)** as the managed identity, and select your Logic App by choosing its name and then the **Select** button. Finally, select **Review + assign** to assign the role.
### Add a condition
-After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the "+" below HTTP to "Choose an operation". On the right, search for the word "condition". Select on "Built-in" to display the Control icon. Next select Actions and choose Condition.
+After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the **+** below HTTP and then **Choose an operation**. On the right, search for the word "condition". Select **Built-in** to display the Control icon. Next select **Actions** and choose **Condition**.
When the condition is ready, you can specify what actions happen if the condition is true or false. ### Choosing a condition criteria
-In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on **Condition** in the workflow. A set of condition choices are then displayed.
+In order to specify whether you want to take action for the specific event, begin specifying the criteria by selecting **Condition** in the workflow. A set of condition choices is then displayed.
Under the **And** box, add these two conditions:
The expression for getting the resourceType is `body('HTTP')?['resourceType']`.
#### Event Type
-You can select Event Type from the Dynamic Content.
+You can select **Event Type** from the Dynamic Content.
Here's an example of the Condition criteria:
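In workflow-definition JSON, the combined condition might look like this sketch; the `Patient` resource type and the event type value are illustrative assumptions:

```json
"Condition": {
  "type": "If",
  "expression": {
    "and": [
      { "equals": [ "@body('HTTP')?['resourceType']", "Patient" ] },
      { "equals": [ "@triggerBody()?['eventType']", "Microsoft.HealthcareApis.FhirResourceCreated" ] }
    ]
  },
  "actions": {},
  "else": { "actions": {} }
}
```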
When you've entered the condition criteria, save your workflow.
#### Workflow dashboard
-To check the status of your workflow, select Overview in the workflow menu. Here's a dashboard for a workflow:
+To check the status of your workflow, select **Overview** in the workflow menu. Here's a dashboard for a workflow:
:::image type="content" source="media/events-logic-apps/events-logic-dashboard.png" alt-text="Screenshot of the Logic App workflow dashboard." lightbox="media/events-logic-apps/events-logic-dashboard.png":::
You can do the following operations from your workflow dashboard:
### Condition testing
-Save your workflow by clicking the "Save" button.
+Save your workflow by selecting the **Save** button.
To test your new workflow, do the following steps:
In this tutorial, you learned how to use Logic Apps to process FHIR events.
To learn about Events, see > [!div class="nextstepaction"]
-> [What are Events?](events-overview.md)
+> [What are events?](events-overview.md)
To learn about the Events frequently asked questions (FAQs), see > [!div class="nextstepaction"]
-> [Frequently asked questions about Events](events-faqs.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md
Title: Deploy Events using the Azure portal - Azure Health Data Services
-description: Learn how to deploy the Events feature using the Azure portal.
+ Title: Deploy events using the Azure portal - Azure Health Data Services
+description: Learn how to deploy the events feature using the Azure portal.
Last updated 06/23/2022
-# Quickstart: Deploy Events using the Azure portal
+# Quickstart: Deploy events using the Azure portal
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this quickstart, learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send FHIR and DICOM event messages.
+In this quickstart, learn how to deploy the events feature in the Azure portal to send FHIR and DICOM event messages.
## Prerequisites
-It's important that you have the following prerequisites completed before you begin the steps of deploying the Events feature in Azure Health Data Services.
+It's important that you have the following prerequisites completed before you begin the steps of deploying the events feature.
* [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc) * [Microsoft Azure Event Hubs namespace and an event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md)
It's important that you have the following prerequisites completed before you be
* [FHIR service deployed in the workspace](../fhir/fhir-portal-quickstart.md) or [DICOM service deployed in the workspace](../dicom/deploy-dicom-services-in-azure.md) > [!IMPORTANT]
-> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the Events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
+> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
> [!NOTE]
-> For the purposes of this quickstart, we'll be using a basic Events set up and an event hub as the endpoint for Events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
+> For the purposes of this quickstart, we'll be using a basic events setup and an event hub as the endpoint for events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
-## Deploy Events
+## Deploy events
-1. Browse to the workspace that contains the FHIR or DICOM service you want to send Events messages from and select the **Events** button on the left hand side of the portal.
+1. Browse to the workspace that contains the FHIR or DICOM service you want to send events messages from and select the **Events** button on the left-hand side of the portal.
:::image type="content" source="media/events-deploy-in-portal/events-workspace-select.png" alt-text="Screenshot of workspace and select Events button." lightbox="media/events-deploy-in-portal/events-workspace-select.png":::
It's important that you have the following prerequisites completed before you be
3. In the **Create Event Subscription** box, enter the following subscription information.
- * **Name**: Provide a name for your Events subscription.
- * **System Topic Name**: Provide a name for your System Topic.
+ * **Name**: Provide a name for your events subscription.
+ * **System Topic Name**: Provide a name for your system topic.
> [!NOTE]
- > The first time you set up the Events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional Events subscriptions that you create within the workspace.
+ > The first time you set up the events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional events subscriptions that you create within the workspace.
* **Event types**: Type of FHIR or DICOM events to send messages for (for example: create, updated, and deleted).
- * **Endpoint Details**: Endpoint to send Events messages to (for example: an Azure Event Hubs namespace + an event hub).
+ * **Endpoint Details**: Endpoint to send events messages to (for example: an Azure Event Hubs namespace + an event hub).
>[!NOTE] > For the purposes of this quickstart, we'll use the **Event Schema** and the **Managed Identity Type** settings at their default values.
It's important that you have the following prerequisites completed before you be
## Next steps
-In this quickstart, you learned how to deploy Events using the Azure portal.
+In this quickstart, you learned how to deploy events using the Azure portal.
-To learn how to enable the Events metrics, see
+To learn how to enable the events metrics, see
> [!div class="nextstepaction"]
-> [How to use Events metrics](events-use-metrics.md)
+> [How to use events metrics](events-use-metrics.md)
To learn how to export Event Grid system diagnostic logs and metrics, see > [!div class="nextstepaction"]
-> [How to enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
+> [How to enable diagnostic settings for events](events-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Disable Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md
Title: How to disable the Events feature and delete Azure Health Data Services workspaces - Azure Health Data Services
-description: Learn how to disable the Events feature and delete Azure Health Data Services workspaces.
+ Title: How to disable the events feature and delete Azure Health Data Services workspaces - Azure Health Data Services
+description: Learn how to disable the events feature and delete Azure Health Data Services workspaces.
Last updated 07/11/2023
-# How to disable the Events feature and delete Azure Health Data Services workspaces
+# How to disable the events feature and delete Azure Health Data Services workspaces
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to disable the Events feature and delete Azure Health Data Services workspaces.
+In this article, learn how to disable the events feature and delete Azure Health Data Services workspaces.
-## Disable Events
+## Disable events
-To disable Events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
+To disable events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted.
1. Select the **Event Subscription** to be deleted. In this example, we select an Event Subscription named **fhir-events**.
To disable Events from sending event messages for a single **Event Subscription*
:::image type="content" source="media/disable-delete-workspaces/events-select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription-delete.png":::
-3. To completely disable Events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain.
+3. To completely disable events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain.
:::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png":::
To avoid errors and successfully delete workspaces, follow these steps and in th
## Next steps
-In this article, you learned how to disable the Events feature and delete workspaces.
+In this article, you learned how to disable the events feature and delete workspaces.
-To learn about how to troubleshoot Events, see
+To learn about how to troubleshoot events, see
> [!div class="nextstepaction"]
-> [Troubleshoot Events](events-troubleshooting-guide.md)
+> [Troubleshoot events](events-troubleshooting-guide.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-enable-diagnostic-settings.md
Title: Enable Events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
-description: Learn how to enable Events diagnostic settings for diagnostic logs and metrics exporting.
+ Title: Enable events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services
+description: Learn how to enable events diagnostic settings for diagnostic logs and metrics exporting.
Last updated 06/23/2022
-# How to enable diagnostic settings for Events
+# How to enable diagnostic settings for events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to enable the Events diagnostic settings for Azure Event Grid system topics.
+In this article, learn how to enable the events diagnostic settings for Azure Event Grid system topics.
## Resources
In this article, learn how to enable the Events diagnostic settings for Azure Ev
|More information about how to work with diagnostics logs.|[Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)| > [!NOTE]
-> It might take up to 15 minutes for the first Events diagnostic logs and metrics to display in the destination of your choice.
+> It might take up to 15 minutes for the first events diagnostic logs and metrics to display in the destination of your choice.
## Next steps
-In this article, you learned how to enable diagnostic settings for Events.
+In this article, you learned how to enable diagnostic settings for events.
-To learn how to use Events metrics using the Azure portal, see
+To learn how to use events metrics using the Azure portal, see
> [!div class="nextstepaction"]
-> [How to use Events metrics](events-use-metrics.md)
+> [How to use events metrics](events-use-metrics.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md
Title: Frequently asked questions about Events - Azure Health Data Services
-description: Learn about the frequently asked questions about Events.
+ Title: Frequently asked questions about events - Azure Health Data Services
+description: Learn about the frequently asked questions about events.
Last updated 07/11/2023
-# Frequently asked questions about Events
+# Frequently asked questions about events
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification. ## Events: The basics
-## Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
+## Can I use events with a FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service?
-No. The Azure Health Data Services Events feature only currently supports the Azure Health Data Services FHIR and DICOM services.
+No. The Azure Health Data Services events feature currently supports only the Azure Health Data Services FHIR and DICOM services.
-## What FHIR resource events does Events support?
+## What FHIR resource changes does events support?
Events are generated from the following FHIR service types:
Events are generated from the following FHIR service types:
For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-## Does Events support FHIR bundles?
+## Does events support FHIR bundles?
-Yes. The Events feature is designed to emit notifications of data changes at the FHIR resource level.
+Yes. The events feature is designed to emit notifications of data changes at the FHIR resource level.
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html) in the following ways:
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-
> [!NOTE] > Events are not sent in the sequence of the data operations in the FHIR bundle.
-## What DICOM image events does Events support?
+## What DICOM image changes does events support?
Events are generated from the following DICOM service types:
Events are generated from the following DICOM service types:
* **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully.
-## What is the payload of an Events message?
+## What is the payload of an events message?
-For a detailed description of the Events message structure and both required and nonrequired elements, see [Events troubleshooting guide](events-troubleshooting-guide.md).
+For a detailed description of the events message structure and both required and nonrequired elements, see [Events message structures](events-message-structure.md).
-## What is the throughput for the Events messages?
+## What is the throughput for the events messages?
The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image change event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace.
-## How am I charged for using Events?
+## How am I charged for using events?
-There are no extra charges for using [Azure Health Data Services Events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
+There are no extra charges for using [Azure Health Data Services events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription.
## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately?
Yes. We recommend that you use different subscribers for each individual FHIR or
Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/).
-## What is the expected time to receive an Events message?
+## What is the expected time to receive an events message?
On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met.
-## Is it possible to receive duplicate Events messages?
+## Is it possible to receive duplicate events messages?
-Yes. The Event Grid guarantees at least one Events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid considers that as a delivery failure and resends the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
+Yes. The Event Grid guarantees at-least-once delivery of each events message with its push mode. An event delivery request can occasionally return a transient failure status code. In this situation, the Event Grid treats the delivery as a failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md).
Generally, we recommend that developers ensure idempotency for the event subscriber, as sketched in the following example. The event ID, or the combination of all fields in the `data` property of the message content, is unique for each event. Developers can rely on them to deduplicate.
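As a minimal sketch of that recommendation (the handler shape and storage are illustrative; only the Event Grid `id` field is assumed from the event schema):

```python
# Idempotent subscriber sketch: deduplicate on the Event Grid event "id".
# An in-memory set stands in for the durable store (for example, a database
# table keyed on event ID) that a production subscriber would use.
processed_ids: set[str] = set()

def handle_event(event: dict) -> None:
    event_id = event["id"]
    if event_id in processed_ids:
        # At-least-once delivery means retried events can arrive again;
        # an already-seen ID is safe to skip.
        return
    processed_ids.add(event_id)
    # Process the notification exactly once.
    print(f"Processing event data: {event.get('data')}")
```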
[FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md)
-[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
- [FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml)
+[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md)
+ [FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md)

FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Message Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md
Title: Events message structure - Azure Health Data Services
-description: Learn about the Events message structures and required values.
+description: Learn about the events message structures and required values.
# Events message structures
-In this article, learn about the Events message structures, required and nonrequired elements, and see samples of Events message payloads.
+In this article, learn about the events message structures, required and nonrequired elements, and see samples of events message payloads.
> [!IMPORTANT]
-> Events currently supports only the following operations:
+> Events currently supports the following operations:
>
> * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
>
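For orientation, a **FhirResourceCreated** message delivered in the Event Grid event schema looks roughly like the following sample; every value is a placeholder, and the tables in this article remain the authoritative list of elements.

```json
{
  "id": "<unique-event-id>",
  "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace>",
  "subject": "<fhir-service-url>/Patient/<resource-id>",
  "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
  "eventTime": "2023-07-11T17:04:42Z",
  "dataVersion": "1",
  "data": {
    "resourceType": "Patient",
    "resourceFhirAccount": "<workspace>-<fhir-service>.fhir.azurehealthcareapis.com",
    "resourceFhirId": "<resource-id>",
    "resourceVersionId": 1
  }
}
```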
## Next steps
-In this article, you learned about the Events message structures.
+In this article, you learned about the events message structures.
-To learn how to deploy Events using the Azure portal, see
+To learn how to deploy events using the Azure portal, see
> [!div class="nextstepaction"]
-> [Deploy Events using the Azure portal](events-deploy-portal.md)
+> [Deploy events using the Azure portal](events-deploy-portal.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
Title: What are Events? - Azure Health Data Services
-description: Learn about Events, its features, integrations, and next steps.
+ Title: What are events? - Azure Health Data Services
+description: Learn about events, its features, integrations, and next steps.
Last updated 07/11/2023
-# What are Events?
+# What are events?
> [!NOTE]
> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-Events are a notification and subscription feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data.
+Events are a subscription and notification feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data.
-When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the Events feature sends notification messages to Events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The Events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace.
+When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the events feature sends notification messages to events subscribers. These event notifications can be sent to multiple endpoints to trigger automation, ranging from starting workflows to sending email and text messages, in response to the changes in the health data they originate from. The events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace.
> [!IMPORTANT]
-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past resource changes or when the feature is turned off.
+> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The events feature doesn't send messages on past resource changes or when the feature is turned off.
> [!TIP]
> For more information about the features, configurations, and to learn about the use cases of the Azure Event Grid service, see [Azure Event Grid](../../event-grid/overview.md)

> [!IMPORTANT]
> Events currently supports the following operations:
Events are designed to support growth and changes in healthcare technology needs
## Configurable
-Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune Events message delivery options.
+Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune events message delivery options.
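For example, an event type filter on the Event Grid subscription can narrow delivery to create events only. This hypothetical fragment uses the Event Grid subscription's `includedEventTypes` property; the fully qualified event type string is an assumption extrapolated from the **FhirResourceCreated** operation named in these articles.

```json
"filter": {
  "includedEventTypes": [
    "Microsoft.HealthcareApis.FhirResourceCreated"
  ]
}
```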
> [!NOTE]
> The advanced features come as part of the Event Grid service.

## Extensible
-Use Events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows to enhance items such as operational data, data analysis, and visibility to the incoming data capturing near real time.
+Use events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows that enhance operational data, data analysis, and visibility into the incoming data in near real time.
## Secure
-Built on a platform that supports protected health information compliance with privacy, safety, and security in mind, the Events messages don't transmit sensitive data as part of the message payload.
+Events are built on a platform that supports protected health information compliance with privacy, safety, and security in mind.
-Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the Events message receiving endpoints of your choice.
+Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the events message receiving endpoints of your choice.
## Next steps
-To learn about deploying Events using the Azure portal, see
+To learn about deploying events using the Azure portal, see
> [!div class="nextstepaction"]
-> [Deploy Events using the Azure portal](./events-deploy-portal.md)
+> [Deploy events using the Azure portal](events-deploy-portal.md)
-To learn about the frequently asks questions (FAQs) about Events, see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about Events](./events-faqs.md)
+To learn about troubleshooting events, see
-To learn about troubleshooting Events, see
+> [!div class="nextstepaction"]
+> [Troubleshoot events](events-troubleshooting-guide.md)
+To learn about the frequently asked questions (FAQs) about events, see
+
> [!div class="nextstepaction"]
-> [Troubleshoot Events](./events-troubleshooting-guide.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md
Title: Troubleshoot Events - Azure Health Data Services
-description: Learn how to troubleshoot Events.
+ Title: Troubleshoot events - Azure Health Data Services
+description: Learn how to troubleshoot events.
Last updated 07/12/2023
-# Troubleshoot Events
+# Troubleshoot events
> [!NOTE]
> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides resources for troubleshooting Events.
+This article provides resources to troubleshoot events.
> [!IMPORTANT]
-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off.
+> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The events feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off.
## Resources for troubleshooting

> [!IMPORTANT]
-> Events currently supports only the following operations:
+> Events currently supports the following operations:
>
> * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
>
This article provides resources for troubleshooting Events.
### Events message structures
-Use this resource to learn about the Events message structures, required and nonrequired elements, and see sample Events messages:
-* [Events message structures](./events-message-structure.md)
+Use this resource to learn about the events message structures, required and nonrequired elements, and see sample events messages:
+* [Events message structures](events-message-structure.md)
### How to's
-Use this resource to learn how to deploy Events in the Azure portal:
-* [Deploy Events using the Azure portal](./events-deploy-portal.md)
+Use this resource to learn how to deploy events in the Azure portal:
+* [Deploy events using the Azure portal](events-deploy-portal.md)
> [!IMPORTANT]
> The Event Subscription requires access to whichever endpoint you chose to send events messages to. For more information, see [Enable managed identity for a system topic](../../event-grid/enable-identity-system-topics.md).
-Use this resource to learn how to use Events metrics:
-* [How to use Events metrics](./events-display-metrics.md)
+Use this resource to learn how to use events metrics:
+* [How to use events metrics](events-display-metrics.md)
-Use this resource to learn how to enable diagnostic settings for Events:
-* [How to enable diagnostic settings for Events](./events-export-logs-metrics.md)
+Use this resource to learn how to enable diagnostic settings for events:
+* [How to enable diagnostic settings for events](events-export-logs-metrics.md)
## Contact support

If you have a technical question about events or if you have a support-related issue, see [Create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) and complete the required fields under the **Problem description** tab. For more information about Azure support options, see [Azure support plans](https://azure.microsoft.com/support/options/#support-plans).

## Next steps
-In this article, you were provided resources for troubleshooting Events.
+In this article, you were provided resources for troubleshooting events.
-To learn about the frequently asked questions (FAQs) about Events, see
+To learn about the frequently asked questions (FAQs) about events, see
> [!div class="nextstepaction"]
-> [Frequently asked questions about Events](events-faqs.md)
+> [Frequently asked questions about events](events-faqs.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Events Use Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-use-metrics.md
Title: Use Events metrics - Azure Health Data Services
-description: Learn how use Events metrics.
+ Title: Use events metrics - Azure Health Data Services
+description: Learn how to use events metrics.
Last updated 07/11/2023
-# How to use Events metrics
+# How to use events metrics
> [!NOTE]
> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-In this article, learn how to use Events metrics using the Azure portal.
+In this article, learn how to use events metrics using the Azure portal.
> [!TIP]
> To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md).

> [!NOTE]
-> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) was used as the Events message endpoint.
+> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) was used as the events message endpoint.
## Use metrics
In this article, learn how to use Events metrics using the Azure portal.
:::image type="content" source="media\events-display-metrics\events-metrics-subscription.png" alt-text="Screenshot of select the metrics button." lightbox="media\events-display-metrics\events-metrics-subscription.png":::
-4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events subscription metrics pages.
+4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events Subscription metrics pages.
:::image type="content" source="media\events-display-metrics\events-metrics-event-hub.png" alt-text="Screenshot of displaying event hubs metrics." lightbox="media\events-display-metrics\events-metrics-event-hub.png":::

## Next steps
-In this tutorial, you learned how to use Events metrics using the Azure portal.
+In this tutorial, you learned how to use events metrics using the Azure portal.
-To learn how to export Events Azure Event Grid system diagnostic logs and metrics, see
+To learn how to enable events diagnostic settings, see
> [!div class="nextstepaction"]
-> [Enable diagnostic settings for Events](events-enable-diagnostic-settings.md)
+> [Enable diagnostic settings for events](events-enable-diagnostic-settings.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
For more information, see [Supported FHIR features](fhir-features-supported.md).
FHIR service is our implementation of the FHIR specification that sits in the Azure Health Data Services, which allows you to have a FHIR service and a DICOM service within a single workspace. Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are:

* FHIR service has a limit of 4 TB, and Azure API for FHIR supports more than 4 TB.
-* FHIR service support [transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+* FHIR service supports additional capabilities, such as:
+  * [Transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+  * [Incremental Import](configure-import-data.md).
+  * [Autoscaling](fhir-service-autoscale.md), which is enabled by default.
* Azure API for FHIR has more platform features (such as customer-managed keys and cross-region DR) that aren't yet available in FHIR service in Azure Health Data Services.

### What's the difference between the FHIR service in Azure Health Data Services and the open-source FHIR server?
SMART (Substitutable Medical Applications and Reusable Technology) on FHIR is a
### Does the FHIR service support SMART on FHIR?
-We have a basic SMART on FHIR proxy as part of the managed service. If this doesn't meet your needs, you can use the open-source FHIR proxy for more advanced SMART scenarios.
+Yes, SMART on FHIR capability is supported using [AHDS samples](https://aka.ms/azure-health-data-services-smart-on-fhir-sample). This capability is referred to as SMART on FHIR (Enhanced). SMART on FHIR (Enhanced) can be considered to meet the requirements of the [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and the [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg). For more information, visit the [SMART on FHIR (Enhanced) documentation](smart-on-fhir.md).
+ ### Can I create a custom FHIR resource?
There are two basic Delete types supported within the FHIR service. These are [D
### Can I perform health checks on FHIR service?
-To perform health check on FHIR service , enter `{{fhirurl}}/health/check` in the GET request. You should be able to see Status of FHIR service. HTTP Status code response with 200 and OverallStatus as "Healthy" in response, means your health check is succesful.
-In case of errors, you will recieve error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and detailed information in response body in some scenarios.
+To perform a health check on the FHIR service, enter `{{fhirurl}}/health/check` in a GET request. You should be able to see the status of the FHIR service. An HTTP response with status code 200 and an OverallStatus of "Healthy" means your health check is successful.
+In case of errors, you receive an error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and in some scenarios detailed information in the response body.
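A quick sketch of that request with Python's third-party `requests` package; the service URL is a placeholder, and reachability of the endpoint from your client is assumed:

```python
import requests

fhir_url = "https://<your-fhir-service>.fhir.azurehealthcareapis.com"

response = requests.get(f"{fhir_url}/health/check", timeout=10)
print(response.status_code)  # 200 on success
print(response.json())       # expect an OverallStatus of "Healthy"
# 404 (Not Found) or 500 (Internal Server Error) indicate a failed check;
# in some scenarios the response body carries details.
```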
## Next steps
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Below tutorials provide steps to enable SMART on FHIR applications with FHIR Ser
- After registering the application, make note of the applicationId for client application.
- Ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
-## SMART on FHIR Enhanced using Azure Health Data Services Samples
+## SMART on FHIR using Azure Health Data Services Samples (SMART on FHIR (Enhanced))
### Step 1: Set up the FHIR SMART user role

Follow the steps listed under the section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role is able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as requests having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
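For orientation only, the decoded payload of a compliant access token might carry claims along these lines; the `fhirUser` and scope claim shapes follow the SMART on FHIR Implementation Guide, and every value is a placeholder:

```json
{
  "aud": "https://<your-fhir-service>.fhir.azurehealthcareapis.com",
  "scope": "launch/patient patient/Patient.read patient/Observation.read",
  "fhirUser": "https://<your-fhir-service>.fhir.azurehealthcareapis.com/Patient/<patient-id>",
  "exp": 1700000000
}
```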
### Step 2: FHIR server integration with samples

For integration with Azure Health Data Services samples, you need to follow the steps in the samples open-source solution.
-**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to Azure Health Data Service Samples OSS. This steps listed in the document will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to the Azure Health Data Services samples OSS. The steps listed in the document enable integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more).
> [!NOTE]
> Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg) compliance, using Azure Active Directory as the identity provider workflow.
For integration with Azure Health Data Services samples, you would need to follo
<summary>Click to expand!</summary>

> [!NOTE]
-> This is another option to using "SMART on FHIR Enhanced using AHDS Samples" mentioned above. We suggest you to adopt SMART on FHIR enhanced. SMART on FHIR Proxy option is legacy option.
-> SMART on FHIR enhanced version provides added capabilities than SMART on FHIR proxy. SMART on FHIR enhanced capability can be considered to meet requirements with [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg).
+> This is an alternative to SMART on FHIR (Enhanced) using AHDS samples mentioned above. We suggest that you adopt SMART on FHIR (Enhanced); the SMART on FHIR proxy option is a legacy option.
+> SMART on FHIR (Enhanced) provides more capabilities than the SMART on FHIR proxy. SMART on FHIR (Enhanced) can be considered to meet the requirements of the [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and the [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg).
-### Step 1 : Set admin consent for your client application
+### Step 1: Set admin consent for your client application
To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
Add the reply URL to the public client application that you created earlier for
<!-- ![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png) -->
-### Step 3 : Get a test patient
+### Step 3: Get a test patient
To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
-### Step 4 : Download the SMART on FHIR app launcher
+### Step 4: Download the SMART on FHIR app launcher
The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
Use this command to run the application:
```
dotnet run
```
-### Step 5 : Test the SMART on FHIR proxy
+### Step 5: Test the SMART on FHIR proxy
After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
healthcare-apis Healthcare Apis Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md
Ensure the region for the new private endpoint is the same as the region for you
[![Screen image of the Azure portal Basics Tab.](media/private-link/private-link-basics.png)](media/private-link/private-link-basics.png#lightbox)
-For the resource type, search and select **Microsoft.HealthcareApis/services** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated.
+For the resource type, search and select **Microsoft.HealthcareApis/workspaces** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated.
[![Screen image of the Azure portal Resource tab.](media/private-link/private-link-resource.png)](media/private-link/private-link-resource.png#lightbox)
industry Get Sensor Data From Sensor Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-sensor-data-from-sensor-partner.md
Title: Get sensor data from the partners
description: This article describes how to get sensor data from partners. + Last updated 11/04/2019
industry Ingest Historical Telemetry Data In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/ingest-historical-telemetry-data-in-azure-farmbeats.md
Last updated 11/04/2019 -+ # Ingest historical telemetry data
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
The following screenshot shows how the successful command response displays in t
:::image type="content" source="media/howto-use-commands/simple-command-ui.png" alt-text="Screenshot showing how to view command payload for a standard command." lightbox="media/howto-use-commands/simple-command-ui.png":::
+> [!NOTE]
+> For standard commands, there's a timeout of 30 seconds. If a device doesn't respond within 30 seconds, IoT Central assumes that the command failed. This timeout period isn't configurable.
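To stay inside that window, a device should respond as soon as the command arrives. Here's a minimal sketch using the Python `azure-iot-device` SDK, where IoT Central commands surface on the device as direct method requests; the connection string and payload are placeholders:

```python
from azure.iot.device import IoTHubDeviceClient, MethodResponse

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

def command_handler(request):
    # Acknowledge quickly so IoT Central gets a response within the
    # 30-second timeout for standard commands.
    payload = {"result": f"{request.name} accepted"}
    response = MethodResponse.create_from_method_request(request, 200, payload)
    client.send_method_response(response)

# IoT Central commands arrive on the device as direct method requests.
client.on_method_request_received = command_handler
client.connect()
```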
+
## Long-running commands

In a long-running command, a device doesn't immediately complete the command. Instead, the device acknowledges receipt of the command and then later confirms that the command completed. This approach lets a device complete a long-running operation without keeping the connection to IoT Central open.
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
Title: Manage IoT Edge certificates+ description: How to install and manage certificates on an Azure IoT Edge device to prepare for production deployment.
All IoT Edge devices use certificates to create secure connections between the runtime and any modules running on the device. IoT Edge devices functioning as gateways use these same certificates to connect to their downstream devices, too.

> [!NOTE]
-> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You do not need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. In many cases, it's actually an intermediate CA certificate.
+> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You don't need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. Often, it's actually an intermediate CA certificate.
## Prerequisites
Edge Daemon issues module server and identity certificates for use by Edge modul
### Renewal
-Server certificates may be issued off the Edge CA certificate or through a DPS-configured CA. Regardless of the issuance method, these certificates must be renewed by the module.
+Server certificates may be issued off the Edge CA certificate. Regardless of the issuance method, these certificates must be renewed by the module. If you develop a custom module, you must implement the renewal logic in your module.
+
+The *edgeHub* module supports a certificate renewal feature. You can configure the *edgeHub* module server certificate renewal using the following environment variables:
+
+* **ServerCertificateRenewAfterInMs**: Sets the duration in milliseconds when the *edgeHub* server certificate is renewed irrespective of certificate expiry time.
+* **MaxCheckCertExpiryInMs**: Sets the duration in milliseconds when *edgeHub* service checks the *edgeHub* server certificate expiration. If the variable is set, the check happens irrespective of certificate expiry time.
+
+For more information about the environment variables, see [EdgeHub and EdgeAgent environment variables](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md).
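In a deployment manifest, those variables sit in the *edgeHub* module's `env` section. A fragment might look like the following, where the millisecond values (90 days and 1 day, respectively) are illustrative only:

```json
"edgeHub": {
  "type": "docker",
  "env": {
    "ServerCertificateRenewAfterInMs": { "value": "7776000000" },
    "MaxCheckCertExpiryInMs": { "value": "86400000" }
  },
  "settings": {
    "image": "mcr.microsoft.com/azureiotedge-hub:1.4"
  }
}
```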
## Changes in 1.2 and later
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
Title: Understand how IoT Edge uses certificates for security+ description: How Azure IoT Edge uses certificate to validate devices, modules, and downstream devices enabling secure connections between them.
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
Previously updated : 02/09/2023 Last updated : 08/15/2023
IoT Hub enforces other operational limits:
| Operation | Limit |
| --- | --- |
-| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. The only way to increase this limit is to contact [Microsoft Support](https://azure.microsoft.com/support/options/).|
+| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. |
| File uploads | 10 concurrent file uploads per device. |
| Jobs<sup>1</sup> | Maximum concurrent jobs are 1 (for Free and S1), 5 (for S2), and 10 (for S3). However, the max concurrent [device import/export jobs](iot-hub-bulk-identity-mgmt.md) is 1 for all tiers. <br/>Job history is retained up to 30 days. |
| Additional endpoints | Basic and standard SKU hubs may have 10 additional endpoints. Free SKU hubs may have one additional endpoint. |
key-vault Assign Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/assign-access-policy.md
description: How to use the Azure CLI to assign a Key Vault access policy to a s
tags: azure-resource-manager-+
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
Title: Prepare Windows lab template
description: Prepare a Windows-based lab template in Azure Lab Services. Configure commonly used software and OS settings, such as Windows Update, OneDrive, and Microsoft 365. +
Install other apps commonly used for teaching through the Windows Store app. Sug
## Next steps

-- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md)
+- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md)
load-balancer Load Balancer Test Frontend Reachability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-test-frontend-reachability.md
Based on the current health probe state of your backend instances, you receive d
## Usage considerations

- ICMP pings can't be disabled and are allowed by default on Standard Public Load Balancers.
+- By design, ICMP pings with packet sizes larger than 64 bytes will be dropped, leading to timeouts.
> [!NOTE]
> ICMP ping requests are not sent to the backend instances; they are handled by the Load Balancer.
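On a Linux client you can probe both sides of that limit with `ping -s`, which sets the ICMP payload size (the 8-byte ICMP header rides on top); the frontend IP is a placeholder, and whether headers count toward the 64-byte limit is an assumption worth verifying for your setup:

```bash
# 56 data bytes + 8-byte ICMP header = 64 bytes total: expected to succeed.
ping -c 4 -s 56 <frontend-ip-address>

# A larger payload pushes the packet past 64 bytes: expected to time out.
ping -c 4 -s 120 <frontend-ip-address>
```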
logic-apps Logic Apps Enterprise Integration As2 Mdn Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md
Title: AS2 MDN acknowledgments
description: Learn about Message Disposition Notification (MDN) acknowledgments for AS2 messages in Azure Logic Apps. ms.suite: integration-- Previously updated : 08/23/2022 Last updated : 08/15/2023 # MDN acknowledgments for AS2 messages in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration As2 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-message-settings.md
Previously updated : 08/23/2022 Last updated : 08/15/2023 # Reference for AS2 message settings in agreements for Azure Logic Apps
Last updated 08/23/2022
This reference describes the properties that you can set in an AS2 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
-<a name="AS2-incoming-messages"></a>
+<a name="as2-inbound-messages"></a>
## AS2 Receive settings
-![Select "Receive Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png)
+![Screenshot shows Azure portal and AS2 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png)
| Property | Required | Description |
|-|-|-|
| **Override message properties** | No | Overrides the properties on incoming messages with your property settings. |
-| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
-| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
+| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
+| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
| **Message should be compressed** | No | Specifies whether all incoming messages must be compressed. Non-compressed messages are rejected. |
| **Disallow Message ID duplicates** | No | Specifies whether to allow messages with duplicate IDs. If you disallow duplicate IDs, select the number of days between checks. You can also choose whether to suspend duplicates. |
| **MDN Text** | No | Specifies the default message disposition notification (MDN) that you want sent to the message sender. |
-| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. |
+| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. |
| **Send signed MDN** | No | Specifies whether to send signed MDNs for received messages. If you require signing, from the **MIC Algorithm** list, select the algorithm to use for signing messages. |
| **Send asynchronous MDN** | No | Specifies whether to send MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. |
-||||
-<a name="AS2-outgoing-messages"></a>
+<a name="as2-outbound-messages"></a>
## AS2 Send settings
-![Select "Send Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png)
+![Screenshot shows Azure portal and AS2 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png)
| Property | Required | Description |
|-|-|-|
-| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <p>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
-| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <p>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). |
+| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <br><br>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
+| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <br><br>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). |
| **Enable message compression** | No | Specifies whether all outgoing messages must be compressed. |
| **Unfold HTTP headers** | No | Puts the HTTP `content-type` header onto a single line. |
| **Transmit file name in MIME header** | No | Specifies whether to include the file name in the MIME header. |
This reference describes the properties that you can set in an AS2 agreement for
| **Request asynchronous MDN** | No | Specifies whether to receive MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. |
| **Enable NRR** | No | Specifies whether to require non-repudiation receipt (NRR). This communication attribute provides evidence that the data was received as addressed. |
| **SHA2 Algorithm format** | No | Specifies the MIC algorithm format to use for signing in the headers for the outgoing AS2 messages or MDN |
-||||
## Next steps
-[Exchange AS2 messages](../logic-apps/logic-apps-enterprise-integration-as2.md)
+[Exchange AS2 messages](logic-apps-enterprise-integration-as2.md)
logic-apps Logic Apps Enterprise Integration As2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2.md
Previously updated : 10/20/2022 Last updated : 08/15/2023 # Exchange AS2 messages using workflows in Azure Logic Apps
To send and receive AS2 messages in workflows that you create using Azure Logic
Except for tracking capabilities, the **AS2 (v2)** connector provides the same capabilities as the original **AS2** connector, runs natively with the Azure Logic Apps runtime, and offers significant performance improvements in message size, throughput, and latency. Unlike the original **AS2** connector, the **AS2 (v2)** connector doesn't require that you create a connection to your integration account. Instead, as described in the prerequisites, make sure that you link your integration account to the logic app resource where you plan to use the connector.
-This article shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this article use the [Request](../connectors/connectors-native-reqres.md) trigger.
+This how-to guide shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
## Connector technical reference
The **AS2 (v2)** connector has no triggers. The following table describes the ac
* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) to define and store artifacts for use in enterprise integration and B2B workflows.
- > [!IMPORTANT]
- >
- > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
-* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario.
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the AS2 operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario.
-* An [AS2 agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type.
+ * Defines an [AS2 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [AS2 message settings](logic-apps-enterprise-integration-as2-message-settings.md).
* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**.
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**.
-
-1. From the actions list, select the action named **AS2 Encode**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-consumption.png)
-
-1. In the action information box, provide the following information.
+1. In the action information box, provide the following information:
| Property | Required | Description |
|-|-|-|
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **Encode to AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **AS2 Encode**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-built-in-standard.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. In the action information pane, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 encode**.
-
-1. From the actions list, select the action named **Encode to AS2 message**.
-
- ![Screenshot showing the Azure portal, workflow designer for Standard, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-message-managed-standard.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**.
-
-1. From the actions list, select the action named **AS2 Decode**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Decode" action selected.](media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. In the action information box, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **Decode AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Decode AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-consumption.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **AS2 Decode**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Decode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-built-in-standard.png)
+1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. In the action information pane, provide the following information:
Select the tab for either Consumption or Standard logic app workflows:
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 decode**.
-
-1. From the actions list, select the action named **Decode AS2 message**.
-
- ![Screenshot showing the Azure portal, designer for Standard workflow, and "Decode AS2 message" operation selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-message-managed-standard.png)
+1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. When prompted to create a connection to your integration account, provide the following information:
logic-apps Logic Apps Enterprise Integration Edifact Contrl Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-contrl-acknowledgment.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # CONTRL acknowledgments and error codes for EDIFACT messages in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration Edifact Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-message-settings.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # Reference for EDIFACT message settings in agreements for Azure Logic Apps
Last updated 08/20/2022
This reference describes the properties that you can set in an EDIFACT agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
-<a name="EDIFACT-inbound-messages"></a>
+<a name="edifact-inbound-messages"></a>
-## EDIFACT Receive Settings
+## EDIFACT Receive settings
-![Screenshot showing Azure portal, EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png)
+![Screenshot showing Azure portal and EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png)
### Identifiers
This reference describes the properties that you can set in an EDIFACT agreement
|-|-|
| **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. |
| **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. |
-|||
### Acknowledgments

| Property | Description |
|-|-|
| **Receipt of Message (CONTRL)** | Return a technical (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send Settings. |
-| **Acknowledgement (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. |
-|||
+| **Acknowledgment (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. |
<a name="receive-settings-schemas"></a>
This reference describes the properties that you can set in an EDIFACT agreement
| **UNH2.5 (Associated Assigned Code)** | The assigned code that is alphanumeric and is 1-6 characters. |
| **UNG2.1 (App Sender ID)** | Enter an alphanumeric value with a minimum of one character and a maximum of 35 characters. |
| **UNG2.2 (App Sender Code Qualifier)** | Enter an alphanumeric value, with a maximum of four characters. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
-|||
+| **Schema** | The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
### Control Numbers
This reference describes the properties that you can set in an EDIFACT agreement
| **Check for duplicate UNB5 every (days)** | If you chose to disallow duplicate interchange control numbers, you can specify the number of days between running the check. |
| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers (UNG5). |
| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers (UNH1). |
-| **EDIFACT Acknowledgement Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix. |
-|||
+| **EDIFACT Acknowledgment Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix. |
### Validation
After you finish setting up a validation row, the next row automatically appears
| **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement and allowed repetition, enumerations, and data element length validation (min and max). |
| **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. |
| **Trim Leading/Trailing Zeroes** | Remove the leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p> - **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The received interchange must have trailing delimiters and separators. |
-|||
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The received interchange must have trailing delimiters and separators. |
### Internal Settings
After you finish setting up a validation row, the next row automatically appears
| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
| **Preserve Interchange - suspend transaction sets on error** | Keep the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, while continuing to process all other transaction sets. |
| **Preserve Interchange - suspend interchange on error** | Keep the interchange intact and create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
-|||
-<a name="EDIFACT-outbound-messages"></a>
+<a name="edifact-outbound-messages"></a>
-## EDIFACT Send Settings
+## EDIFACT Send settings
-![Screenshot showing Azure portal, EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png)
+![Screenshot showing Azure portal and EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png)
### Identifiers
After you finish setting up a validation row, the next row automatically appears
| **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. |
| **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. |
| **UNB7 (Application Reference ID)** | An alphanumeric value that is 1-14 characters. |
-|||
### Acknowledgment

| Property | Description |
|-|-|
| **Receipt of Message (CONTRL)** | The host partner that sends the message requests a technical (CONTRL) acknowledgment from the guest partner. |
-| **Acknowledgement (CONTRL)** | The host partner that sends the message expects requests a functional (CONTRL) acknowledgment from the guest partner. |
+| **Acknowledgment (CONTRL)** | The host partner that sends the message requests a functional (CONTRL) acknowledgment from the guest partner. |
| **Generate SG1/SG4 loop for accepted transaction sets** | If you chose to request a functional acknowledgment, this setting forces the generation of SG1/SG4 loops in the functional acknowledgments for accepted transaction sets. |
-|||
### Schemas
After you finish setting up a validation row, the next row automatically appears
| **UNH2.1 (Type)** | The transaction set type. |
| **UNH2.2 (Version)** | The message version number. |
| **UNH2.3 (Release)** | The message release number. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource |
-|||
+| **Schema** | The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource. |
### Envelopes
After you finish setting up an envelope row, the next row automatically appears.
| **UNB10 (Communication Agreement)** | An alphanumeric value that is 1-40 characters. | | **UNB11 (Test Indicator)** | Indicate that the generated interchange is test data. | | **Apply UNA Segment (Service String Advice)** | Generate a UNA segment for the interchange to send. |
-| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <p>- **Schema**: The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>- **UNG1**: An alphanumeric value that is 1-6 characters. <p>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG6**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.2**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <p>- **UNG8**: An alphanumeric value that is 1-14 characters. |
-|||
+| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <br><br>- **Schema**: The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource. <br><br>- **UNG1**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG6**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.2**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG8**: An alphanumeric value that is 1-14 characters. |
### Character Sets and Separators
Other than the character set, you can specify a different set of delimiters to u
| Property | Description |
|-|-|
| **UNB1.1 (System Identifier)** | The EDIFACT character set to apply to the outbound interchange. |
-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears. |
+| **Schema** | The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource. <br><br>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears. |
| **Input Type** | The input type for the message. |
| **Component Separator** | A single character to use for separating composite data elements. |
| **Data Element Separator** | A single character to use for separating simple data elements within composite data elements. |
Other than the character set, you can specify a different set of delimiters to u
| **UNA5 (Repetition Separator)** | A value to use for the repetition separator that separates segments that repeat within a transaction set. |
| **Segment Terminator** | A single character that indicates the end of an EDI segment. |
| **Suffix** | The character to use with the segment identifier. If you designate a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you have to designate a suffix. |
-|||
### Control Numbers
Other than the character set, you can specify a different set of delimiters to u
| **UNB5 (Interchange Control Number)** | A prefix, a range of values to use as the interchange control number, and a suffix. These values are used to generate an outbound interchange. The control number is required, but the prefix and suffix are optional. The control number is incremented for each new message, while the prefix and suffix stay the same. |
| **UNG5 (Group Control Number)** | A prefix, a range of values to use as the group control number, and a suffix. These values are used to generate the group control number. The control number is required, but the prefix and suffix are optional. The control number is incremented for each new message until the maximum value is reached, while the prefix and suffix stay the same. |
| **UNH1 (Message Header Reference Number)** | A prefix, a range of values for the message header reference number, and a suffix. These values are used to generate the message header reference number. The reference number is required, but the prefix and suffix are optional. The reference number is incremented for each new message, while the prefix and suffix stay the same. |
-|||
### Validation
After you finish setting up a validation row, the next row automatically appears
| **Extended Validation** | If the data type isn't EDI, run validation on the data element requirement and allowed repetition, enumerations, and data element length validation (min/max). |
| **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. |
| **Trim Leading/Trailing Zeroes** | Remove leading or trailing zero characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The sent interchange must have trailing delimiters and separators. |
-|||
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The sent interchange must have trailing delimiters and separators. |
## Next steps
-[Exchange EDIFACT messages](../logic-apps/logic-apps-enterprise-integration-edifact.md)
+[Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md)
logic-apps Logic Apps Enterprise Integration Edifact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact.md
Previously updated : 09/29/2021 Last updated : 08/15/2023 # Exchange EDIFACT messages using workflows in Azure Logic Apps
-To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides triggers and actions that support and manage EDIFACT communication.
+To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides operations that support and manage EDIFACT communication.
-This article shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. Although you can use any trigger to start your workflow, the examples use the [Request](../connectors/connectors-native-reqres.md) trigger. For more information about the **EDIFACT** connector's triggers, actions, and limits version, review the [connector's reference page](/connectors/edifact/) as documented by the connector's Swagger file.
+This how-to guide shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. The **EDIFACT** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
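+
+For reference, here's a rough sketch of a Request trigger in a Consumption workflow's code view. The trigger name **manual** is the designer default, and the empty `schema` value is a placeholder that accepts any payload:
+
+```json
+"triggers": {
+    "manual": {
+        "type": "Request",
+        "kind": "Http",
+        "inputs": {
+            "schema": {}
+        }
+    }
+}
+```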
-![Overview screenshot showing the "Decode EDIFACT message" operation with the message decoding properties.](./media/logic-apps-enterprise-integration-edifact/overview-edifact-message-consumption.png)
+## Connector technical reference
-## EDIFACT encoding and decoding
+The **EDIFACT** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **EDIFACT** connector, see the following documentation:
-The following sections describe the tasks that you can complete using the EDIFACT encoding and decoding actions.
+* [Connector reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file
+
+* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits)
+
+ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
+
+The following sections provide more information about the tasks that you can complete using the EDIFACT encoding and decoding actions.
### Encode to EDIFACT message action
+This action performs the following tasks:
+
 * Resolve the agreement by matching the sender qualifier and identifier and the receiver qualifier and identifier.
 * Serialize the Electronic Data Interchange (EDI), which converts XML-encoded messages into EDI transaction sets in the interchange.
The following sections describe the tasks that you can complete using the EDIFAC
### Decode EDIFACT message action
+This action performs the following tasks:
+
 * Validate the envelope against the trading partner agreement.
 * Resolve the agreement by matching the sender qualifier and identifier along with the receiver qualifier and identifier.
The following sections describe the tasks that you can complete using the EDIFAC
* A functional acknowledgment that acknowledges the acceptance or rejection for the received interchange or group.
-## Connector reference
-
-For technical information about the **EDIFACT** connector, review the [connector's reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file. Also, review the [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) for workflows running in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, or the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
-
## Prerequisites

* An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).

* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
- * Is associated with the same Azure subscription as your logic app resource.
-
- * Exists in the same location or Azure region as your logic app resource.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- * When you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your logic app resource doesn't need a link to your integration account. However, you still need this account to store artifacts, such as partners, agreements, and certificates, along with using the EDIFACT, [X12](logic-apps-enterprise-integration-x12.md), or [AS2](logic-apps-enterprise-integration-as2.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource.
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **EDIFACT** operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario.
- * When you use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your workflow requires a connection to your integration account that you create directly from your workflow when you add the AS2 operation.
+ * Defines an [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md).
-* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario.
+ > [!IMPORTANT]
+ >
+ > The EDIFACT connector supports only UTF-8 characters. If your output contains
+ > unexpected characters, check that your EDIFACT messages use the UTF-8 character set.
-* An [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type.
+* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
- > [!IMPORTANT]
- > The EDIFACT connector supports only UTF-8 characters. If your output contains
- > unexpected characters, check that your EDIFACT messages use the UTF-8 character set.
+ | Logic app workflow | Link required? |
+ |--|-|
+ | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. |
+ | Standard | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. |
* The logic app resource and workflow where you want to use the EDIFACT operations.
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**.
-
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. For this example, select the action named **Encode to EDIFACT message by agreement name**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" action selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-consumption.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
> [!NOTE]
- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to
- > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by
- > your EDIFACT agreement. You also have to specify the **XML message to encode**, which can be the output from
- > the trigger or a preceding action.
+ >
+ > If you want to use the **Encode to EDIFACT message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your EDIFACT agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection. |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. |
| **XML message to encode** | Yes | The XML message to encode, such as the **Body** content output from the trigger or a preceding action. |
| Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br><br>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the XML message payload can be the **Body** content output from the Request trigger:
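+
+As a rough sketch only, an encode action in a Consumption workflow's code view follows the standard API connection action shape shown below. The agreement name is a hypothetical placeholder, and the exact `path` value and `body` property names are defined by the connector's Swagger file, not by this sketch:
+
+```json
+"Encode_to_EDIFACT_message_by_agreement_name": {
+    "type": "ApiConnection",
+    "inputs": {
+        "host": {
+            "connection": {
+                "name": "@parameters('$connections')['edifact']['connectionId']"
+            }
+        },
+        "method": "post",
+        "path": "/<encode-operation-path>",
+        "body": {
+            "content": "@triggerBody()",
+            "agreementName": "<your-EDIFACT-agreement-name>"
+        }
+    },
+    "runAfter": {}
+}
+```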
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Encode to EDIFACT message by agreement name**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-standard.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
> [!NOTE]
- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to
- > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by
- > your EDIFACT agreement. You also have to specify the **XML message to encode**, which can be the output from
- > the trigger or a preceding action.
+ >
+ > If you want to use the **Encode to EDIFACT message by identities** action instead,
+ > you later have to provide different values, such as the **Sender identifier**
+ > and **Receiver identifier** that are specified by your EDIFACT agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection. |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT details pane appears on the designer, provide information for the following properties:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. |
| **XML message to encode** | Yes | The XML message to encode, such as the **Body** content output from the trigger or a preceding action. |
| Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br><br>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the message payload is the **Body** content output from the Request trigger:
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**.
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**.
-
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection. |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **EDIFACT flat file message to decode** | Yes | The EDIFACT flat file message to decode. |
| Other parameters | No | This operation includes the following other parameters: <br><br>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <br><br>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the message payload to decode can be the **Body** content output from the Request trigger:
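+
+If the decode output follows the same shape as the X12 decoder's output, where each transaction set in the `goodMessages` or `badMessages` arrays carries a base64-encoded `Payload`, an expression like the following converts one entry back to XML inside a loop. This is a sketch under that assumption, not the connector's documented contract:
+
+```json
+"content": "@xml(base64ToBinary(item()?['Payload']))"
+```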
For technical information about the **EDIFACT** connector, review the [connector
1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**.
-
-1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**.
-
- ![Screenshot showing the Azure portal, workflow designer, and "Decode EDIFACT message" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-decode-edifact-message-standard.png)
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. When prompted to create a connection to your integration account, provide the following information:
+1. When prompted, provide the following connection information for your integration account:
| Property | Required | Description |
|-|-|-|
| **Connection name** | Yes | A name for the connection. |
| **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
- ||||
For example:
For technical information about the **EDIFACT** connector, review the [connector
1. When you're done, select **Create**.
-1. After the EDIFACT details pane appears on the designer, provide information for the following properties:
+1. In the EDIFACT action information box, provide the following property values:
| Property | Required | Description |
|-|-|-|
| **EDIFACT flat file message to decode** | Yes | The EDIFACT flat file message to decode. |
| Other parameters | No | This operation includes the following other parameters: <br><br>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <br><br>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- ||||
For example, the message payload is the **Body** content output from the Request trigger:
For technical information about the **EDIFACT** connector, review the [connector
## Handle UNH2.5 segments in EDIFACT documents
-In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for used for schema lookup. For example, in this sample EDIFACT message, the UNH field is `EAN008`:
+In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for schema lookup. For example, in the following sample EDIFACT message, the UNH2.5 field is `EAN008`:
`UNH+SSDD1+ORDERS:D:03B:UN:EAN008`
To handle an EDIFACT document or process an EDIFACT message that has a UN2.5 seg
For example, suppose the schema root name for the sample UNH field is `EFACT_D03B_ORDERS_EAN008`. For each `D03B_ORDERS` that has a different UNH2.5 segment, you have to deploy an individual schema.
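+
+As a purely hypothetical illustration of this naming convention, each distinct UNH2.5 value maps to its own deployed schema root name (the `EAN010` entry is invented for the example):
+
+```json
+{
+    "ORDERS:D:03B:UN:EAN008": "EFACT_D03B_ORDERS_EAN008",
+    "ORDERS:D:03B:UN:EAN010": "EFACT_D03B_ORDERS_EAN010"
+}
+```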
-1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, which is based on whether you're working with the **Logic App (Consumption)** or **Logic App (Standard)** resource type respectively.
+1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, based on whether you have a Consumption or Standard logic app workflow, respectively.
1. Whether you're using the EDIFACT decoding or encoding action, upload your schema and set up the schema settings in your EDIFACT agreement's **Receive Settings** or **Send Settings** sections respectively.
logic-apps Logic Apps Enterprise Integration X12 997 Acknowledgment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-997-acknowledgment.md
Previously updated : 08/20/2022 Last updated : 08/15/2023 # 997 functional acknowledgments and error codes for X12 messages in Azure Logic Apps
The optional AK3 segment reports errors in a data segment and identifies the loc
|-|-|
| AK301 | Mandatory, identifies the segment in error with the X12 segment ID, for example, NM1. |
| AK302 | Mandatory, identifies the segment count of the segment in error. The ST segment is `1`, and each segment increments the segment count by one. |
-| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by an Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. |
+| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by a Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. |
| AK304 | Optional, specifies the code for the error in the data segment. Although AK304 is optional, the element is required when an error exists for the identified segment. For AK304 error codes, review [997 ACK error codes - Data Segment Note](#997-ack-error-codes). |
logic-apps Logic Apps Enterprise Integration X12 Decode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-decode.md
- Title: Decode X12 messages
-description: Validate EDI and generate acknowledgements with X12 message decoder in Azure Logic Apps with Enterprise Integration Pack.
----- Previously updated : 01/27/2017--
-# Decode X12 messages in Azure Logic Apps with Enterprise Integration Pack
-
-With the Decode X12 message connector, you can validate the envelope against a trading partner agreement, validate EDI and partner-specific properties, split interchanges into transactions sets or preserve entire interchanges, and generate acknowledgments for processed transactions.
-To use this connector, you must add the connector to an existing trigger in your logic app.
-
-## Before you start
-
-Here's the items you need:
-
-* An Azure account; you can create a [free account](https://azure.microsoft.com/free)
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md)
-that's already defined and associated with your Azure subscription.
-You must have an integration account to use the Decode X12 message connector.
-* At least two [partners](logic-apps-enterprise-integration-partners.md)
-that are already defined in your integration account
-* An [X12 agreement](logic-apps-enterprise-integration-x12.md)
-that's already defined in your integration account
-
-## Decode X12 messages
-
-1. Create a logic app workflow. For more information, see the following documentation:
-
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-
- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-
-2. The Decode X12 message connector doesn't have triggers,
-so you must add a trigger for starting your logic app, like a Request trigger.
-In the Logic App Designer, add a trigger, and then add an action to your logic app.
-
-3. In the search box, enter "x12" for your filter.
-Select **X12 - Decode X12 message**.
-
- ![Search for "x12"](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage1.png)
-
-3. If you didn't previously create any connections to your integration account,
-you're prompted to create that connection now. Name your connection,
-and select the integration account that you want to connect.
-
- ![Provide integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage4.png)
-
- Properties with an asterisk are required.
-
- | Property | Details |
- | | |
- | Connection Name * |Enter any name for your connection. |
- | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. |
-
-5. When you're done, your connection details should look similar to this example.
-To finish creating your connection, choose **Create**.
-
- ![integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage5.png)
-
-6. After your connection is created, as shown in this example,
-select the X12 flat file message to decode.
-
- ![integration account connection created](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage6.png)
-
- For example:
-
- ![Select X12 flat file message for decoding](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage7.png)
-
- > [!NOTE]
- > The actual message content or payload for the message array, good or bad,
- > is base64 encoded. So, you must specify an expression that processes this content.
- > Here is an example that processes the content as XML that you can
- > enter in code view
- > or by using expression builder in the designer.
- > ``` json
- > "content": "@xml(base64ToBinary(item()?['Payload']))"
- > ```
- > ![Content example](media/logic-apps-enterprise-integration-x12-decode/content-example.png)
- >
--
-## X12 Decode details
-
-The X12 Decode connector performs these tasks:
-
-* Validates the envelope against trading partner agreement
-* Validates EDI and partner-specific properties
- * EDI structural validation, and extended schema validation
- * Validation of the structure of the interchange envelope.
- * Schema validation of the envelope against the control schema.
- * Schema validation of the transaction-set data elements against the message schema.
- * EDI validation performed on transaction-set data elements
-* Verifies that the interchange, group, and transaction set control numbers are not duplicates
- * Checks the interchange control number against previously received interchanges.
- * Checks the group control number against other group control numbers in the interchange.
- * Checks the transaction set control number against other transaction set control numbers in that group.
-* Splits the interchange into transaction sets, or preserves the entire interchange:
- * Split Interchange as transaction sets - suspend transaction sets on error:
- Splits interchange into transaction sets and parses each transaction set.
- The X12 Decode action outputs only those transaction sets
- that fail validation to `badMessages`, and outputs the remaining transactions sets to `goodMessages`.
- * Split Interchange as transaction sets - suspend interchange on error:
- Splits interchange into transaction sets and parses each transaction set.
- If one or more transaction sets in the interchange fail validation,
- the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`.
- * Preserve Interchange - suspend transaction sets on error:
- Preserve the interchange and process the entire batched interchange.
- The X12 Decode action outputs only those transaction sets that fail validation to `badMessages`,
- and outputs the remaining transactions sets to `goodMessages`.
- * Preserve Interchange - suspend interchange on error:
- Preserve the interchange and process the entire batched interchange.
- If one or more transaction sets in the interchange fail validation,
- the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`.
-* Generates a Technical and/or Functional acknowledgment (if configured).
- * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver.
- * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document
-
-## View the swagger
-See the [swagger details](/connectors/x12/).
-
-## Next steps
-[Learn more about the Enterprise Integration Pack](../logic-apps/logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack")
-
logic-apps Logic Apps Enterprise Integration X12 Encode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-encode.md
- Title: Encode X12 messages
-description: Validate EDI and convert XML-encoded messages with X12 message encoder in Azure Logic Apps with Enterprise Integration Pack.
----- Previously updated : 01/27/2017--
-# Encode X12 messages in Azure Logic Apps with Enterprise Integration Pack
-
-With the Encode X12 message connector, you can validate EDI and partner-specific properties,
-convert XML-encoded messages into EDI transaction sets in the interchange,
-and request a Technical Acknowledgement, Functional Acknowledgment, or both.
-To use this connector, you must add the connector to an existing trigger in your logic app.
-
-## Before you start
-
-Here's the items you need:
-
-* An Azure account; you can create a [free account](https://azure.microsoft.com/free)
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md)
-that's already defined and associated with your Azure subscription.
-You must have an integration account to use the Encode X12 message connector.
-* At least two [partners](logic-apps-enterprise-integration-partners.md)
-that are already defined in your integration account
-* An [X12 agreement](logic-apps-enterprise-integration-x12.md)
-that's already defined in your integration account
-
-## Encode X12 messages
-
-1. Create a logic app workflow. For more information, see the following documentation:
-
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-
- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-
-2. The Encode X12 message connector doesn't have triggers,
-so you must add a trigger for starting your logic app, like a Request trigger.
-In the Logic App Designer, add a trigger, and then add an action to your logic app.
-
-3. In the search box, enter "x12" for your filter.
-Select either **X12 - Encode to X12 message by agreement name**
-or **X12 - Encode to X12 message by identities**.
-
- ![Search for "x12"](./media/logic-apps-enterprise-integration-x12-encode/x12decodeimage1.png)
-
-3. If you didn't previously create any connections to your integration account,
-you're prompted to create that connection now. Name your connection,
-and select the integration account that you want to connect.
-
- ![integration account connection](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage1.png)
-
- Properties with an asterisk are required.
-
- | Property | Details |
- | | |
- | Connection Name * |Enter any name for your connection. |
- | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. |
-
-5. When you're done, your connection details should look similar to this example.
-To finish creating your connection, choose **Create**.
-
- ![integration account connection created](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage2.png)
-
- Your connection is now created.
-
- ![integration account connection details](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage3.png)
-
-#### Encode X12 messages by agreement name
-
-If you chose to encode X12 messages by agreement name,
-open the **Name of X12 agreement** list,
-enter or select your existing X12 agreement. Enter the XML message to encode.
-
-![Enter X12 agreement name and XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage4.png)
-
-#### Encode X12 messages by identities
-
-If you choose to encode X12 messages by identities, enter the sender identifier,
-sender qualifier, receiver identifier, and receiver qualifier as
-configured in your X12 agreement. Select the XML message to encode.
-
-![Provide identities for sender and receiver, select XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage5.png)
-
-## X12 Encode details
-
-The X12 Encode connector performs these tasks:
-
-* Agreement resolution by matching sender and receiver context properties.
-* Serializes the EDI interchange, converting XML-encoded messages into EDI transaction sets in the interchange.
-* Applies transaction set header and trailer segments
-* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange
-* Replaces separators in the payload data
-* Validates EDI and partner-specific properties
- * Schema validation of the transaction-set data elements against the message Schema
- * EDI validation performed on transaction-set data elements.
- * Extended validation performed on transaction-set data elements
-* Requests a Technical and/or Functional acknowledgment (if configured).
- * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver
- * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document
-
-## View the swagger
-See the [swagger details](/connectors/x12/).
-
-## Next steps
-[Learn more about the Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack")
-
logic-apps Logic Apps Enterprise Integration X12 Message Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-message-settings.md
+
+ Title: X12 message settings
+description: Reference guide for X12 message settings in agreements for Azure Logic Apps with Enterprise Integration Pack.
+
+ms.suite: integration
++++ Last updated : 08/15/2023++
+# Reference for X12 message settings in agreements for Azure Logic Apps
++
+This reference describes the properties that you can set in an X12 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you.
+
+<a name="x12-inbound-messages"></a>
+
+## X12 Receive settings
+
+![Screenshot showing Azure portal and X12 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-receive-settings.png)
+
+<a name="inbound-identifiers"></a>
+
+### Identifiers
+
+| Property | Description |
+|-|-|
+| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. |
+| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. |
+| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+
+<a name="inbound-acknowledgment"></a>
+
+### Acknowledgment
+
+| Property | Description |
+|-|-|
+| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. |
+| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <br><br>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <br><br>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. |
+
+<a name="inbound-schemas"></a>
+
+### Schemas
+
+For this section, select a [schema](logic-apps-enterprise-integration-schemas.md) from your [integration account](logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. For example, for an incoming transaction set that starts with `ST*850*0001` and has the sender application `SENDERAPP` in GS02, the pipeline uses the row where **Transaction Type (ST01)** is `850` and **Sender Application (GS02)** is `SENDERAPP`. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Version** | The X12 version for the schema |
+| **Transaction Type (ST01)** | The transaction type |
+| **Sender Application (GS02)** | The sender application |
+| **Schema** | The schema file that you want to use |
+
+<a name="inbound-envelopes"></a>
+
+### Envelopes
+
+| Property | Description |
+|-|-|
+| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the caret (^) is used as the repetition separator. For HIPAA schemas, you can only use the caret. |
+
+<a name="inbound-control-numbers"></a>
+
+### Control Numbers
+
+| Property | Description |
+|-|-|
+| **Disallow Interchange control number duplicates** | Block duplicate interchanges. This setting checks the interchange control number (ISA13) in the received interchange against previously received control numbers. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <br><br>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. |
+| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. |
+| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. |
+
+<a name="inbound-validations"></a>
+
+### Validations
+
+The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Message Type** | The EDI message type |
+| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
+| **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement and allowed repetition, enumerations, and data element length validation (min/max). |
+| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
+| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. |
+
+<a name="inbound-internal-settings"></a>
+
+### Internal Settings
+
+| Property | Description |
+|-|-|
+| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. For example, with the format N2, the value 12345 becomes 123.45. |
+| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. |
+| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. |
+| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
+| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. |
+| **Preserve Interchange - suspend interchange on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
+
+<a name="x12-outbound-settings"></a>
+
+## X12 Send settings
+
+![Screenshot showing Azure portal and X12 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-send-settings.png)
+
+<a name="outbound-identifiers"></a>
+
+### Identifiers
+
+| Property | Description |
+|-|-|
+| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. |
+| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. |
+| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
+
+<a name="outbound-acknowledgment"></a>
+
+### Acknowledgment
+
+| Property | Description |
+|-|-|
+| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
+| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
+
+<a name="outbound-schemas"></a>
+
+### Schemas
+
+For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Version** | The X12 version for the schema |
+| **Transaction Type (ST01)** | The transaction type for the schema |
+| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. |
+
+<a name="outbound-envelopes"></a>
+
+### Envelopes
+
+| Property | Description |
+|-|-|
+| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, the caret (^) is usually used as the repetition separator. For HIPAA schemas, you can use only the caret. |
+
+<a name="outbound-control-version-number"></a>
+
+#### Control Version Number
+
+For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Control Version Number (ISA12)** | The version of the X12 standard |
+| **Usage Indicator (ISA15)** | The context of an interchange, which is either **Test** data, **Information** data, or **Production** data |
+| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. |
+| **GS1** | Optional. Select the functional code. |
+| **GS2** | Optional. Specify the application sender. |
+| **GS3** | Optional. Specify the application receiver. |
+| **GS4** | Optional. Select **CCYYMMDD** or **YYMMDD**. |
+| **GS5** | Optional. Select **HHMM**, **HHMMSS**, or **HHMMSSdd**. |
+| **GS7** | Optional. Select a value for the responsible agency. |
+| **GS8** | Optional. Specify the schema document version. |
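+
+If you manage these per-interchange GS values by editing the agreement as JSON rather than through the portal, they typically appear as entries in an envelope overrides array. The following fragment is only a hedged sketch: the property names are assumptions based on the Microsoft.Logic X12 agreement schema, and every value shown is illustrative, so confirm both against your own agreement's JSON view.
+
+```json
+"envelopeOverrides": [
+  {
+    "messageId": "850",
+    "protocolVersion": "00401",
+    "responsibleAgencyCode": "X",
+    "senderApplicationId": "SENDERAPP",
+    "receiverApplicationId": "RECEIVERAPP",
+    "dateFormat": "CCYYMMDD",
+    "timeFormat": "HHMM",
+    "headerVersion": "1",
+    "targetNamespace": "http://schemas.microsoft.com/BizTalk/EDI/X12/2006"
+  }
+]
+```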
+
+<a name="outbound-control-numbers"></a>
+
+### Control Numbers
+
+| Property | Description |
+|-|-|
+| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum value of 1 and a maximum value of 999999999 |
+| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 |
+| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <br><br>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value |
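+
+In the agreement JSON, these control number ranges typically appear in the send settings' envelope section. The following fragment is a hedged sketch only: the property names are assumptions based on the Microsoft.Logic X12 agreement schema, so confirm them against your own agreement's JSON view before editing.
+
+```json
+"envelopeSettings": {
+  "interchangeControlNumberLowerBound": 1,
+  "interchangeControlNumberUpperBound": 999999999,
+  "groupControlNumberLowerBound": 1,
+  "groupControlNumberUpperBound": 999999999,
+  "transactionSetControlNumberLowerBound": 1,
+  "transactionSetControlNumberUpperBound": 999999999,
+  "transactionSetControlNumberPrefix": "",
+  "transactionSetControlNumberSuffix": ""
+}
+```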
+
+<a name="outbound-character-sets-separators"></a>
+
+### Character Sets and Separators
+
+The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears.
+
+> [!TIP]
+>
+> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character.
+
+| Property | Description |
+|-|-|
+| **Character Set to be used** | The X12 character set, which is either **Basic**, **Extended**, or **UTF8**. |
+| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. |
+| **Input Type** | The input type for the character set |
+| **Component Separator** | A single character that separates composite data elements |
+| **Data Element Separator** | A single character that separates simple data elements within composite data |
+| **Replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message |
+| **Segment Terminator** | A single character that indicates the end of an EDI segment |
+| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. |
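+
+For example, if you follow the preceding tip and edit the agreement as JSON, the separators appear as ASCII code values in the agreement's framing section. The following fragment is a hedged sketch: the property names are assumptions based on the Microsoft.Logic X12 agreement schema, and the values shown, such as 42 for the asterisk (*), 58 for the colon (:), 126 for the tilde (~), and 36 for the dollar sign ($), are only illustrative.
+
+```json
+"framingSettings": {
+  "characterSet": "UTF8",
+  "componentSeparator": 58,
+  "dataElementSeparator": 42,
+  "segmentTerminator": 126,
+  "segmentTerminatorSuffix": "None",
+  "replaceSeparatorsInPayload": false,
+  "replaceCharacter": 36
+}
+```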
+
+<a name="outbound-validation"></a>
+
+### Validation
+
+The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+
+| Property | Description |
+|-|-|
+| **Message Type** | The EDI message type |
+| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
+| **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement and allowed repetition, enumerations, and data element length (minimum and maximum). |
+| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
+| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
+| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. |
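+
+If you define these rules by editing the agreement as JSON, each non-default row typically becomes an entry in a validation overrides array. The following fragment is a hedged sketch: the property names are assumptions based on the Microsoft.Logic X12 agreement schema, and the `850` message type is only an example.
+
+```json
+"validationOverrides": [
+  {
+    "messageId": "850",
+    "validateEDITypes": true,
+    "validateXSDTypes": false,
+    "allowLeadingAndTrailingSpacesAndZeroes": false,
+    "trimLeadingAndTrailingSpacesAndZeroes": false,
+    "trailingSeparatorPolicy": "Optional"
+  }
+]
+```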
+
+<a name="hipaa-schemas"></a>
+
+## HIPAA schemas and message types
+
+When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
+
+`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."`
+
+This table lists the affected message types, any variants, and the document version numbers that map to those message types:
+
+| Message type or variant | Description | Document version number (GS8) |
+|-|--|-|
+| 277 | Health Care Information Status Notification | 005010X212 |
+| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 |
+| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 |
+| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 |
+
+You also need to disable EDI validation when you use these document version numbers because they're longer than nine characters and otherwise cause an invalid character length error.
+
+To specify these document version numbers and message types, follow these steps:
+
+1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use.
+
+ For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead.
+
+ To update your schema, follow these steps:
+
+ 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema).
+
+ 1. In your agreement's message settings, select the revised schema.
+
+1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number.
+
+ For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values:
+
+ ```json
+ "schemaReferences": [
+ {
+ "messageId": "837",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837"
+ }
+ ]
+ ```
+
+ In this `schemaReferences` section, add another entry that has these values:
+
+ * `"messageId": "837_P"`
+ * `"schemaVersion": "00501"`
+ * `"schemaName": "X12_00501_837_P"`
+
+ When you're done, your `schemaReferences` section looks like this:
+
+ ```json
+ "schemaReferences": [
+ {
+ "messageId": "837",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837"
+ },
+ {
+ "messageId": "837_P",
+ "schemaVersion": "00501",
+ "schemaName": "X12_00501_837_P"
+ }
+ ]
+ ```
+
+1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values.
+
+ ![Screenshot shows X12 agreement settings to disable validation for all message types or each message type.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-disable-validation.png)
+
+## Next steps
+
+[Exchange X12 messages](logic-apps-enterprise-integration-x12.md)
logic-apps Logic Apps Enterprise Integration X12 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12.md
Title: Exchange X12 messages for B2B integration
-description: Send, receive, and process X12 messages when building B2B enterprise integration solutions with Azure Logic Apps and the Enterprise Integration Pack.
+ Title: Exchange X12 messages in B2B workflows
+description: Exchange X12 messages between partners by creating workflows with Azure Logic Apps and Enterprise Integration Pack.
ms.suite: integration -+ Previously updated : 08/20/2022 Last updated : 08/15/2023
-# Exchange X12 messages for B2B enterprise integration using Azure Logic Apps and Enterprise Integration Pack
+# Exchange X12 messages using workflows in Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-In Azure Logic Apps, you can create workflows that work with X12 messages by using **X12** operations. These operations include triggers and actions that you can use in your workflow to handle X12 communication. You can add X12 triggers and actions in the same way as any other trigger and action in a workflow, but you need to meet extra prerequisites before you can use X12 operations.
+To send and receive X12 messages in workflows that you create using Azure Logic Apps, use the **X12** connector, which provides operations that support and manage X12 communication.
-This article describes the requirements and settings for using X12 triggers and actions in your workflow. If you're looking for EDIFACT messages instead, review [Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md). If you're new to logic apps, see [What is Azure Logic Apps](logic-apps-overview.md) and the following documentation:
+This how-to guide shows how to add the X12 encoding and decoding actions to an existing logic app workflow. The **X12** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
-* [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+## Connector technical reference
-* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
+The **X12** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **X12** connector, see the following documentation:
+
+* [Connector reference page](/connectors/x12/), which describes the triggers, actions, and limits as documented by the connector's Swagger file
+
+* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits)
+
+ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* A logic app resource and workflow where you want to use an X12 trigger or action. To use an X12 trigger, you need a blank workflow. To use an X12 action, you need a workflow that has an existing trigger.
+* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
-* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app resource. Both your logic app and integration account have to use the same Azure subscription and exist in the same Azure region or location.
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
- Your integration account also need to include the following B2B artifacts:
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **X12** operation used in your workflow. The definitions for both partners must use the same X12 business identity qualifier.
- * At least two [trading partners](logic-apps-enterprise-integration-partners.md) that use the X12 identity qualifier.
-
- * An X12 [agreement](logic-apps-enterprise-integration-agreements.md) defined between your trading partners. For information about settings to use when receiving and sending messages, review [Receive Settings](#receive-settings) and [Send Settings](#send-settings).
+ * Defines an [X12 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md).
> [!IMPORTANT]
+ >
> If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, you have to add a
- > `schemaReferences` section to your agreement. For more information, review [HIPAA schemas and message types](#hipaa-schemas).
+ > `schemaReferences` section to your agreement. For more information, see [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas).
- * The [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation.
+ * Defines the [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation.
> [!IMPORTANT]
- > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](#hipaa-schemas).
-
-## Connector reference
-
-For more technical information about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/x12/).
-
-> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [B2B message limits for ISE](../logic-apps/logic-apps-limits-and-config.md#b2b-protocol-limits).
-
-<a name="receive-settings"></a>
-
-## Receive Settings
-
-After you set the properties in your trading partner agreement, you can configure how this agreement identifies and handles inbound messages that you receive from your partner through this agreement.
-
-1. Under **Add**, select **Receive Settings**.
-
-1. Based on the agreement with the partner that exchanges messages with you, set the properties in the **Receive Settings** pane, which is organized into the following sections:
-
- * [Identifiers](#inbound-identifiers)
- * [Acknowledgement](#inbound-acknowledgement)
- * [Schemas](#inbound-schemas)
- * [Envelopes](#inbound-envelopes)
- * [Control Numbers](#inbound-control-numbers)
- * [Validations](#inbound-validations)
- * [Internal Settings](#inbound-internal-settings)
-
-1. When you're done, make sure to save your settings by selecting **OK**.
-
-<a name="inbound-identifiers"></a>
-
-### Receive Settings - Identifiers
-
-![Identifier properties for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-identifiers.png)
-
-| Property | Description |
-|-|-|
-| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. |
-| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. |
-| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-|||
+ >
+ > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas).
-<a name="inbound-acknowledgement"></a>
+* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account:
-### Receive Settings - Acknowledgement
+ | Logic app workflow | Link required? |
+ |--|-|
+ | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **X12** operation to your workflow. |
+ | Standard | Connection to integration account required, but no link required. You can create the connection when you add the **X12** operation to your workflow. |
-![Acknowledgement for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-acknowledgement.png)
+* The logic app resource and workflow where you want to use the X12 operations.
-| Property | Description |
-|-|-|
-| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. |
-| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <p>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <p>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. |
+ For more information, see the following documentation:
-<a name="inbound-schemas"></a>
+ * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
-### Receive Settings - Schemas
+ * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
-![Schemas for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-schemas.png)
+<a name="encode"></a>
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. After you complete each row, a new empty row automatically appears.
+## Encode X12 messages
-| Property | Description |
-|-|-|
-| **Version** | The X12 version for the schema |
-| **Transaction Type (ST01)** | The transaction type |
-| **Sender Application (GS02)** | The sender application |
-| **Schema** | The schema file that you want to use |
-|||
+The **Encode to X12 message** operation performs the following tasks:
-<a name="inbound-envelopes"></a>
+* Resolves the agreement by matching sender and receiver context properties.
+* Serializes the EDI interchange and converts XML-encoded messages into EDI transaction sets in the interchange.
+* Applies transaction set header and trailer segments.
+* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange.
+* Replaces separators in the payload data.
+* Validates EDI and partner-specific properties.
+ * Schema validation of transaction-set data elements against the message schema.
+ * EDI validation on transaction-set data elements.
+ * Extended validation on transaction-set data elements.
+* Requests a Technical and Functional Acknowledgment, if configured.
+ * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver.
+  * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document.
-### Receive Settings - Envelopes
+### [Consumption](#tab/consumption)
-![Separators to use in transaction sets for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-envelopes.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the carat (^) is used as the repetition separator. For HIPAA schemas, you can only use the carat. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-<a name="inbound-control-numbers"></a>
+ > [!NOTE]
+ >
+   > If you want to use the **Encode to X12 message by identities** action instead,
+   > you later have to provide different values, such as the **Sender identifier**
+   > and **Receiver identifier** values that are specified by your X12 agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-### Receive Settings - Control Numbers
+1. When prompted, provide the following connection information for your integration account:
-![Handling control number duplicates for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-control-numbers.png)
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for the connection |
+ | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. |
-| Property | Description |
-|-|-|
-| **Disallow Interchange control number duplicates** | Block duplicate interchanges. Check the interchange control number (ISA13) for the received interchange control number. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <p><p>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. |
-| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. |
-| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. |
-|||
+ For example:
-<a name="inbound-validations"></a>
+ ![Screenshot showing Consumption workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-consumption.png)
-### Receive Settings - Validations
+1. When you're done, select **Create**.
-![Validations for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-validations.png)
+1. In the X12 action information box, provide the following property values:
-The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Name of X12 agreement** | Yes | The X12 agreement to use. |
+ | **XML message to encode** | Yes | The XML message to encode |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-| Property | Description |
-|-|-|
-| **Message Type** | The EDI message type |
-| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
-| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
-| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
-| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. |
-|||
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload:
-<a name="inbound-internal-settings"></a>
+ ![Screenshot showing Consumption workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-consumption.png)
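+
+   If you switch from the designer to code view, the finished step appears as an **ApiConnection** action in the workflow definition. The following is a minimal, hypothetical sketch: the action name, the agreement name, and the `path` value are assumptions, because the designer generates the actual `path` from the connector's Swagger file.
+
+   ```json
+   "Encode_to_X12_message_by_agreement_name": {
+     "type": "ApiConnection",
+     "runAfter": {},
+     "inputs": {
+       "host": {
+         "connection": {
+           "name": "@parameters('$connections')['x12']['connectionId']"
+         }
+       },
+       "method": "post",
+       "path": "/encode",
+       "queries": {
+         "agreementName": "X12-Agreement"
+       },
+       "body": "@triggerBody()"
+     }
+   }
+   ```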
-### Receive Settings - Internal Settings
+### [Standard](#tab/standard)
-![Internal settings for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-internal-settings.png)
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. |
-| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. |
-| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. |
-| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |
-| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. |
-| **Preserve Interchange - suspend interchange on error** |Leaves the interchange intact, creates an XML document for the entire batched interchange. Suspends the entire interchange when one or more transaction sets in the interchange fail validation. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-<a name="send-settings"></a>
+ > [!NOTE]
+ >
+   > If you want to use the **Encode to X12 message by identities** action instead,
+   > you later have to provide different values, such as the **Sender identifier**
+   > and **Receiver identifier** values that are specified by your X12 agreement.
+ > You also have to specify the **XML message to encode**, which can be the output
+ > from the trigger or a preceding action.
-## Send Settings
+1. When prompted, provide the following connection information for your integration account:
-After you set the agreement properties, you can configure how this agreement identifies and handles outbound messages that you send to your partner through this agreement.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection Name** | Yes | A name for the connection |
+ | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. |
+   | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. |
+ | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios |
-1. Under **Add**, select **Send Settings**.
+ For example:
-1. Configure these properties based on your agreement with the partner that exchanges messages with you. For property descriptions, see the tables in this section.
+ ![Screenshot showing Standard workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-standard.png)
- The **Send Settings** are organized into these sections:
+1. When you're done, select **Create**.
- * [Identifiers](#outbound-identifiers)
- * [Acknowledgement](#outbound-acknowledgement)
- * [Schemas](#outbound-schemas)
- * [Envelopes](#outbound-envelopes)
- * [Control Version Number](#outbound-control-version-number)
- * [Control Numbers](#outbound-control-numbers)
- * [Character Sets and Separators](#outbound-character-sets-separators)
- * [Validation](#outbound-validation)
+1. In the X12 action information box, provide the following property values:
-1. When you're done, make sure to save your settings by selecting **OK**.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Name Of X12 Agreement** | Yes | The X12 agreement to use. |
+ | **XML Message To Encode** | Yes | The XML message to encode |
+ | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-<a name="outbound-identifiers"></a>
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload:
-### Send Settings - Identifiers
+ ![Screenshot showing Standard workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-standard.png)
-![Identifier properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-identifiers.png)
-
-| Property | Description |
-|-|-|
-| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. |
-| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. |
-| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. |
-|||
-
-<a name="outbound-acknowledgement"></a>
-
-### Send Settings - Acknowledgement
-
-![Acknowledgement properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-acknowledgement.png)
-
-| Property | Description |
-|-|-|
-| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
-| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgements. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgement from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. |
-|||
-
-<a name="outbound-schemas"></a>
-
-### Send Settings - Schemas
-
-![Schemas for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-schemas.png)
-
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears.
-
-| Property | Description |
-|-|-|
-| **Version** | The X12 version for the schema |
-| **Transaction Type (ST01)** | The transaction type for the schema |
-| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. |
-|||
-
-<a name="outbound-envelopes"></a>
-
-### Send Settings - Envelopes
-
-![Separators in a transaction set to use for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-envelopes.png)
-
-| Property | Description |
-|-|-|
-| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the carat (^) is used as the repetition separator. For HIPAA schemas, you can only use the carat. |
-|||
-
-<a name="outbound-control-version-number"></a>
-
-### Send Settings - Control Version Number
+
-![Control version number for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-version-number.png)
+<a name="decode"></a>
-For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears.
+## Decode X12 messages
-| Property | Description |
-|-|-|
-| **Control Version Number (ISA12)** | The version of the X12 standard |
-| **Usage Indicator (ISA15)** | The context of an interchange, which is either **Test** data, **Information** data, or **Production** data |
-| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. |
-| **GS1** | Optional, select the functional code. |
-| **GS2** | Optional, specify the application sender. |
-| **GS3** | Optional, specify the application receiver. |
-| **GS4** | Optional, select **CCYYMMDD** or **YYMMDD**. |
-| **GS5** | Optional, select **HHMM**, **HHMMSS**, or **HHMMSSdd**. |
-| **GS7** | Optional, select a value for the responsible agency. |
-| **GS8** | Optional, specify the schema document version. |
-|||
+The **Decode X12 message** operation performs the following tasks:
-<a name="outbound-control-numbers"></a>
+* Validates the envelope against the trading partner agreement.
-### Send Settings - Control Numbers
+* Validates EDI and partner-specific properties.
-![Control numbers for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-numbers.png)
+ * EDI structural validation and extended schema validation
+ * Interchange envelope structural validation
+ * Schema validation of the envelope against the control schema
+ * Schema validation of the transaction set data elements against the message schema
+ * EDI validation on transaction-set data elements
-| Property | Description |
-|-|-|
-| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum of value 1 and a maximum value of 999999999 |
-| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 |
-| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <p>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value |
-|||
+* Verifies that the interchange, group, and transaction set control numbers aren't duplicates.
-<a name="outbound-character-sets-separators"></a>
+ * Checks the interchange control number against previously received interchanges.
+ * Checks the group control number against other group control numbers in the interchange.
+ * Checks the transaction set control number against other transaction set control numbers in that group.
-### Send Settings - Character Sets and Separators
+* Splits an interchange into transaction sets, or preserves the entire interchange:
-![Delimiters for message types in outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-character-sets-separators.png)
+  * Split the interchange into transaction sets and suspend transaction sets on error: Parse each transaction set. The X12 decode action outputs only the transaction sets that fail validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
-The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears.
+  * Split the interchange into transaction sets and suspend the interchange on error: Parse each transaction set. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`.
-> [!TIP]
-> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character.
+  * Preserve the interchange and suspend transaction sets on error: Preserve the interchange and process the entire batched interchange. The X12 decode action outputs only the transaction sets that fail validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
-| Property | Description |
-|-|-|
-| **Character Set to be used** | The X12 character set, which is either **Basic**, **Extended**, or **UTF8**. |
-| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. |
-| **Input Type** | The input type for the character set |
-| **Component Separator** | A single character that separates composite data elements |
-| **Data Element Separator** | A single character that separates simple data elements within composite data |
-| **replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message |
-| **Segment Terminator** | A single character that indicates the end of an EDI segment |
-| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. |
-|||
+  * Preserve the interchange and suspend the interchange on error: Preserve the interchange and process the entire batched interchange. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`.
-<a name="outbound-validation"></a>
+* Generates a Technical and Functional Acknowledgment, if configured.
-### Send Settings - Validation
+ * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver.
+ * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document.
-![Validation properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-validation.png)
+### [Consumption](#tab/consumption)
-The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears.
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
-| Property | Description |
-|-|-|
-| **Message Type** | The EDI message type |
-| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. |
-| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). |
-| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. |
-| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. |
-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. |
-|||
+1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-<a name="hipaa-schemas"></a>
+1. When prompted, provide the following connection information for your integration account:
-## HIPAA schemas and message types
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection name** | Yes | A name for the connection |
+ | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. |
-When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message:
+ For example:
-`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."`
+ ![Screenshot showing Consumption workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-consumption.png)
-This table lists the affected message types, any variants, and the document version numbers that map to those message types:
+1. When you're done, select **Create**.
-| Message type or variant | Description | Document version number (GS8) |
-|-|--|-|
-| 277 | Health Care Information Status Notification | 005010X212 |
-| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 |
-| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 |
-| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 |
-|||
+1. In the X12 action information box, provide the following property values:
-You also need to disable EDI validation when you use these document version numbers because they result in an error that the character length is invalid.
+ | Property | Required | Description |
+ |-|-|-|
+ | **X12 flat file message to decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
-To specify these document version numbers and message types, follow these steps:
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression:
-1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use.
+ ![Screenshot showing Consumption workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-consumption.png)
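+
+   Because each decoded transaction set in the output arrays is base64 encoded, a common follow-up pattern is a **For each** loop over the action's `goodMessages` output that converts each item back to XML by using the expression from the preceding table. The following is a hedged sketch: the `Decode_X12_message` action name and the `Read_transaction_set_as_XML` step name are hypothetical.
+
+   ```json
+   "For_each_good_message": {
+     "type": "Foreach",
+     "foreach": "@body('Decode_X12_message')?['goodMessages']",
+     "runAfter": {
+       "Decode_X12_message": [ "Succeeded" ]
+     },
+     "actions": {
+       "Read_transaction_set_as_XML": {
+         "type": "Compose",
+         "inputs": "@xml(base64ToBinary(item()?['Body']))"
+       }
+     }
+   }
+   ```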
- For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead.
+### [Standard](#tab/standard)
- To update your schema, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
- 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema).
+1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- 1. In your agreement's message settings, select the revised schema.
+1. When prompted, provide the following connection information for your integration account:
-1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Connection Name** | Yes | A name for the connection |
+ | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. |
+   | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. |
+ | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios |
- For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values:
+ For example:
- ```json
- "schemaReferences": [
- {
- "messageId": "837",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837"
- }
- ]
- ```
+ ![Screenshot showing Standard workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-standard.png)
- In this `schemaReferences` section, add another entry that has these values:
+1. When you're done, select **Create**.
- * `"messageId": "837_P"`
- * `"schemaVersion": "00501"`
- * `"schemaName": "X12_00501_837_P"`
+1. In the X12 action information box, provide the following property values:
- When you're done, your `schemaReferences` section looks like this:
+ | Property | Required | Description |
+ |-|-|-|
+ | **X12 Flat File Message To Decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** |
+ | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). |
- ```json
- "schemaReferences": [
- {
- "messageId": "837",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837"
- },
- {
- "messageId": "837_P",
- "schemaVersion": "00501",
- "schemaName": "X12_00501_837_P"
- }
- ]
- ```
+ For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression:
-1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values.
+ ![Screenshot showing Standard workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-standard.png)
- ![Disable validation for all message types or each message type](./media/logic-apps-enterprise-integration-x12/x12-disable-validation.png)
+ ## Next steps * [X12 TA1 technical acknowledgments and error codes](logic-apps-enterprise-integration-x12-ta1-acknowledgment.md) * [X12 997 functional acknowledgments and error codes](logic-apps-enterprise-integration-x12-997-acknowledgment.md)
-* [Managed connectors for Azure Logic Apps](../connectors/managed.md)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
+* [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md)
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
Title: Create and manage instance types for efficient compute resource utilization
-description: Learn about what is instance types, and how to create and manage them, and what are benefits of using instance types
+ Title: Create and manage instance types for efficient utilization of compute resources
+description: Learn about what instance types are, how to create and manage them, and what the benefits of using them are.
-# Create and manage instance types for efficient compute resource utilization
+# Create and manage instance types for efficient utilization of compute resources
-## What are instance types?
+Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure virtual machine, an example of an instance type is `STANDARD_D2_V3`.
-Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure VM, an example for an instance type is `STANDARD_D2_V3`.
+In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that's installed with the Azure Machine Learning extension. Two elements in the Azure Machine Learning extension represent the instance types:
-In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the Azure Machine Learning extension. Two elements in Azure Machine Learning extension represent the instance types:
-[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector)
-and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+- Use [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) to specify which node a pod should run on. The node must have a corresponding label.
+- In the [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) section, you can set the compute resources (CPU, memory, and NVIDIA GPU) for the pod.
-In short, a `nodeSelector` lets you specify which node a pod should run on. The node must have a corresponding label. In the `resources` section, you can set the compute resources (CPU, memory and NVIDIA GPU) for the pod.
+If you [specify a nodeSelector field when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the `nodeSelector` field will be applied to all instance types. This means that:
->[!IMPORTANT]
->
-> If you have [specified a nodeSelector when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the nodeSelector will be applied to all instance types. This means that:
-> - For each instance type creating, the specified nodeSelector should be a subset of the extension-specified nodeSelector.
-> - If you use an instance type **with nodeSelector**, the workload will run on any node matching both the extension-specified nodeSelector and the instance type-specified nodeSelector.
-> - If you use an instance type **without a nodeSelector**, the workload will run on any node matching the extension-specified nodeSelector.
+- For each instance type that you create, the specified `nodeSelector` field should be a subset of the extension-specified `nodeSelector` field, as the sketch after this list illustrates.
+- If you use an instance type with `nodeSelector`, the workload will run on any node that matches both the extension-specified `nodeSelector` field and the instance-type-specified `nodeSelector` field.
+- If you use an instance type without a `nodeSelector` field, the workload will run on any node that matches the extension-specified `nodeSelector` field.
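To make the subset requirement concrete, here's a minimal Python sketch with hypothetical label values; the instance type's selector passes the check only if every one of its key-value pairs also appears in the extension's selector.

```python
# Hypothetical selectors: the extension-level nodeSelector and an instance type's nodeSelector.
extension_node_selector = {"accelerator": "nvidia", "agentpool": "mlpool"}
instance_type_node_selector = {"accelerator": "nvidia"}

# dict items views support set comparisons, so <= tests the subset relationship directly.
is_valid_subset = instance_type_node_selector.items() <= extension_node_selector.items()
print(is_valid_subset)  # True: a pod must match both selectors, so they must be compatible
```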
+## Create a default instance type
-## Default instance type
-
-By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace:
-- If you don't apply a `nodeSelector`, it means the pod can get scheduled on any node.-- The workload's pods are assigned default resources with 0.1 cpu cores, 2-GB memory and 0 GPU for request.-- The resources used by the workload's pods are limited to 2 cpu cores and 8-GB memory:
+By default, an instance type called `defaultinstancetype` is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace. Here's the definition:
```yaml
resources:
  requests:
    cpu: "100m"
    memory: "2Gi"
  limits:
    cpu: "2"
    memory: "8Gi"
    nvidia.com/gpu: null
```
-> [!NOTE]
-> - The default instance type purposefully uses little resources. To ensure all ML workloads run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types.
-> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK).
-> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section)
+If you don't apply a `nodeSelector` field, the pod can be scheduled on any node. The workload's pods are assigned default resources with 0.1 CPU cores, 2 GB of memory, and 0 GPUs for the request. The resources that the workload's pods use are limited to 2 CPU cores and 8 GB of memory.
+
+The default instance type purposefully uses few resources. To ensure that all machine learning workloads run with appropriate resources (for example, GPU resource), we highly recommend that you [create custom instance types](#create-a-custom-instance-type).
+
+Keep in mind the following points about the default instance type:
+
+- `defaultinstancetype` doesn't appear as an `InstanceType` custom resource in the cluster when you're running the command `kubectl get instancetype`, but it does appear in all clients (UI, Azure CLI, SDK).
+- `defaultinstancetype` can be overridden with the definition of a custom instance type that has the same name.
-### Create custom instance types
+## Create a custom instance type
-To create a new instance type, create a new custom resource for the instance type CRD. For example:
+To create a new instance type, create a new custom resource for the instance type CRD. For example:
```bash
kubectl apply -f my_instance_type.yaml
```
-With `my_instance_type.yaml`:
+Here are the contents of *my_instance_type.yaml*:
+
```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceType
metadata:
  name: myinstancetypename
spec:
  nodeSelector:
    mylabel: mylabelvalue
  resources:
    limits:
      cpu: "1"
      nvidia.com/gpu: 1
      memory: "2Gi"
    requests:
      cpu: "700m"
      memory: "1500Mi"
```
-The following steps create an instance type with the labeled behavior:
-- Pods are scheduled only on nodes with label `mylabel: mylabelvalue`.-- Pods are assigned resource requests of `700m` CPU and `1500Mi` memory.-- Pods are assigned resource limits of `1` CPU, `2Gi` memory and `1` NVIDIA GPU.
+The preceding code creates an instance type with the following behavior:
-Creation of custom instance types must meet the following parameters and definition rules, otherwise the instance type creation fails:
+- Pods are scheduled only on nodes that have the label `mylabel: mylabelvalue`.
+- Pods are assigned resource requests of `700m` for CPU and `1500Mi` for memory.
+- Pods are assigned resource limits of `1` for CPU, `2Gi` for memory, and `1` for NVIDIA GPU.
-| Parameter | Required | Description |
-| | | |
-| name | required | String values, which must be unique in cluster.|
-| CPU request | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.|
-| Memory request | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
-| CPU limit | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.|
-| Memory limit | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
-| GPU | optional | Integer values, which can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). |
-| nodeSelector | optional | Map of string keys and values. |
+A custom instance type must meet the following parameter and definition rules, or its creation fails:
+| Parameter | Required or optional | Description |
+| | | |
+| `name` | Required | String values, which must be unique in a cluster.|
+| `CPU request` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `Memory request` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 mebibytes (MiB).|
+| `CPU limit` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `Memory limit` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.|
+| `GPU` | Optional | Integer values, which can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). |
+| `nodeSelector` | Optional | Map of string keys and values. |
It's also possible to create multiple instance types at once:
```bash
kubectl apply -f my_instance_type_list.yaml
```
-With `my_instance_type_list.yaml`:
+Here are the contents of *my_instance_type_list.yaml*:
+
```yaml
apiVersion: amlarc.azureml.com/v1alpha1
kind: InstanceTypeList
items:
    memory: "1Gi"
```
-The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition created when Kubernetes cluster was attached to Azure Machine Learning workspace.
+The preceding example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition that was created when you attached the Kubernetes cluster to the Azure Machine Learning workspace.
-If you submit a training or inference workload without an instance type, it uses the `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with name `defaultinstancetype`. It's automatically recognized as the default.
+If you submit a training or inference workload without an instance type, it uses `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with the name `defaultinstancetype`. It's automatically recognized as the default.
+## Select an instance type to submit a training job
-### Select instance type to submit training job
+### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli)
-#### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli)
-
-To select an instance type for a training job using CLI (V2), specify its name as part of the
-`resources` properties section in job YAML. For example:
+To select an instance type for a training job by using the Azure CLI (v2), specify its name as part of the
+`resources` properties section in the job YAML. For example:
```yaml
command: python -c "print('Hello world!')"
environment:
  image: library/python:latest
compute: azureml:<Kubernetes-compute_target_name>
resources:
- instance_type: <instance_type_name>
+ instance_type: <instance type name>
```
-#### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk)
+### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk)
-To select an instance type for a training job using SDK (V2), specify its name for `instance_type` property in `command` class. For example:
+To select an instance type for a training job by using the SDK (v2), specify its name for the `instance_type` property in the `command` class. For example:
```python
from azure.ai.ml import command

command_job = command(
    command="python -c \"print('Hello world!')\"",
    environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",
    compute="<Kubernetes-compute_target_name>",
- instance_type="<instance_type_name>"
+ instance_type="<instance type name>"
)
```
+
-In the above example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute
-target and replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to submit the job.
+In the preceding example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute target. Replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to submit the job.
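To complete the picture, here's a minimal sketch of submitting `command_job` with an `MLClient`; the subscription, resource group, and workspace values are placeholders.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder workspace details; fill in your own values.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Submit the job; the instance_type set on command_job is applied at scheduling time.
returned_job = ml_client.jobs.create_or_update(command_job)
print(returned_job.name)
```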
-### Select instance type to deploy model
+## Select an instance type to deploy a model
-#### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli)
+### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli)
-To select an instance type for a model deployment using CLI (V2), specify its name for the `instance_type` property in the deployment YAML. For example:
+To select an instance type for a model deployment by using the Azure CLI (v2), specify its name for the `instance_type` property in the deployment YAML. For example:
```yaml
name: blue
environment:
  image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest
```
-#### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk)
+### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk)
-To select an instance type for a model deployment using SDK (V2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example:
+To select an instance type for a model deployment by using the SDK (v2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example:
```python
from azure.ai.ml.entities import KubernetesOnlineDeployment, Model, Environment, CodeConfiguration

blue_deployment = KubernetesOnlineDeployment(
    # ... other deployment properties ...
    instance_type="<instance type name>",
)
```
+
-In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to deploy the model.
+In the preceding example, replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to deploy the model.
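As a rough sketch of the follow-on step, assuming an `ml_client` like the one in the training example and an existing Kubernetes online endpoint, you'd submit the deployment like this:

```python
# begin_create_or_update returns a poller; result() blocks until the deployment finishes.
poller = ml_client.online_deployments.begin_create_or_update(blue_deployment)
deployment = poller.result()
print(deployment.provisioning_state)
```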
> [!IMPORTANT]
->
-> For MLFlow model deployment, the resource request require at least 2 CPU and 4 GB memory, otherwise the deployment will fail.
+> For MLflow model deployment, the resource request requires at least 2 CPU cores and 4 GB of memory. Otherwise, the deployment will fail.
+
+### Resource section validation
-#### Resource section validation
-If you're using the `resource section` to define the resource request and limit of your model deployments, for example:
+You can use the `resources` section to define the resource request and limit of your model deployments. For example:
#### [Azure CLI](#tab/define-resource-to-modeldeployment-with-cli)
resources:
  memory: "0.5Gi"
instance_type: <instance type name>
```
+
#### [Python SDK](#tab/define-resource-to-modeldeployment-with-sdk)

```python
blue_deployment = KubernetesOnlineDeployment(
    instance_type="<instance type name>",
)
```
+
-If you use the `resource section`, the valid resource definition need to meet the following rules, otherwise the model deployment fails due to the invalid resource definition:
+If you use the `resources` section, a valid resource definition needs to meet the following rules. An invalid resource definition will cause the model deployment to fail.
-| Parameter | If necessary | Description |
+| Parameter | Required or optional | Description |
| | | |
-| `requests:`<br>`cpu:`| Required | String values, which can't be 0 or empty. <br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`.|
-| `requests:`<br>`memory:` | Required | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB. <br>Memory can't be less than **1 MBytes**.|
-| `limits:`<br>`cpu:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. <br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`. |
-| `limits:`<br>`memory:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB.|
-| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(only required when need GPU) | Integer values, which can't be empty and can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If require CPU only, you can omit the entire `limits` section.|
-
-> [!NOTE]
->
->If the resource section definition is invalid, the deployment will fail.
->
-> The `instance type` is **required** for model deployment. If you have defined the resource section, and it will be validated against the instance type, the rules are as follows:
- > * With a valid resource section definition, the resource limits must be less than instance type limits, otherwise deployment will fail.
- > * If the user does not define instance type, the `defaultinstancetype` will be used to be validated with resource section.
- > * If the user does not define resource section, the instance type will be used to create deployment.
+| `requests:`<br>`cpu:`| Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`.|
+| `requests:`<br>`memory:` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB. <br>Memory can't be less than 1 MB.|
+| `limits:`<br>`cpu:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`. |
+| `limits:`<br>`memory:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 MiB.|
+| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(required only when you need GPU) | Integer values, which can't be empty and can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If you require CPU only, you can omit the entire `limits` section.|
+
+The instance type is *required* for model deployment. If you define the `resources` section, it's validated against the instance type as follows (see the sketch after this list):
+- With a valid `resources` section definition, the resource limits must be less than the instance type limits. Otherwise, deployment fails.
+- If you don't define an instance type, the system uses `defaultinstancetype` for validation against the `resources` section.
+- If you don't define the `resources` section, the system uses the instance type to create the deployment.
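Here's a minimal SDK (v2) sketch of a deployment that satisfies these rules; the endpoint name and resource values are hypothetical, with requests and limits chosen to stay within the instance type's limits.

```python
from azure.ai.ml.entities import (
    KubernetesOnlineDeployment,
    ResourceRequirementsSettings,
    ResourceSettings,
)

# Hypothetical values: requests and limits must be nonzero strings, and the
# limits must stay below the limits of the instance type named here.
blue_deployment = KubernetesOnlineDeployment(
    name="blue",
    endpoint_name="<endpoint-name>",
    resources=ResourceRequirementsSettings(
        requests=ResourceSettings(cpu="100m", memory="0.5Gi"),
        limits=ResourceSettings(cpu="1", memory="1Gi"),
    ),
    instance_type="<instance type name>",
)
```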
## Next steps

- [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)
-- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
+- [Secure Azure Kubernetes Service inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)
machine-learning Migrate To V2 Assets Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md
This article gives a comparison of scenario(s) in SDK v1 and SDK v2.
ml_client.models.create_or_update(run_model)
```
+For more information about models, see [Work with models in Azure Machine Learning](how-to-manage-models.md).
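For context, a `run_model` like the one registered above can be defined with the v2 `Model` entity; the path and name below are illustrative placeholders, not the article's exact values.

```python
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Illustrative only: point a v2 Model at a job's output and register it.
run_model = Model(
    path="azureml://jobs/<job-name>/outputs/artifacts/paths/model/",
    name="run-model-example",
    type=AssetTypes.MLFLOW_MODEL,
)
ml_client.models.create_or_update(run_model)
```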
+
## Mapping of key functionality in SDK v1 and SDK v2

|Functionality in SDK v1|Rough mapping in SDK v2|
machine-learning How To Develop A Standard Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-develop-a-standard-flow.md
We also support the input types int, bool, double, list, and object.
:::image type="content" source="./media/how-to-develop-a-standard-flow/flow-input-datatype.png" alt-text="Screenshot of inputs showing the type drop-down menu with string selected. " lightbox = "./media/how-to-develop-a-standard-flow/flow-input-datatype.png":::
-You should first set the input schema (name: url; type: string), then set a value manually or by:
+## Develop the flow using different tools
-1. Inputting data manually in the value field.
-2. Selecting a row of existing dataset in **fill value from data**.
--
-The dataset selection supports search and autosuggestion.
--
-After selecting a row, the url is backfilled to the value field.
-
-If the existing datasets don't meet your needs, upload new data from files. We support **.csv** and **.txt** for now.
--
-## Develop tool in your flow
-
-In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety and Vector Search.
+In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety, Vector Search, and more.
### Add tools as you need
First define flow output schema, then select in drop-down the node whose output
## Next steps

-- [Develop a customized evaluation flow](how-to-develop-an-evaluation-flow.md)
+- [Bulk test using more data and evaluate the flow performance](how-to-bulk-test-evaluate-flow.md)
- [Tune prompts using variants](how-to-tune-prompts-using-variants.md)
- [Deploy a flow](how-to-deploy-for-real-time-inference.md)
mysql How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md
Last updated 11/21/2022 -+
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Title: Traffic analytics schema and data aggregation
description: Learn about schema and data aggregation in Azure Network Watcher traffic analytics to analyze flow logs. --- Previously updated : 04/11/2023 -++ Last updated : 08/16/2023 # Schema and data aggregation in Azure Network Watcher traffic analytics
Traffic analytics is a cloud-based solution that provides visibility into user a
## Data aggregation
+# [**NSG flow logs**](#tab/nsg)
- All flow logs at a network security group between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t` are captured at one-minute intervals as blobs in a storage account.
- Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes.
-- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol` (TCP or UDP) (Note: source port is excluded for aggregation) are clubbed into a single flow by traffic analytics.
-- This single record is decorated (details in the section below) and ingested in Log Analytics by traffic analytics. This process can take up to 1 hour max.
+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation).
+- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. This process can take up to 1 hour.
- `FlowStartTime_t` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`.
-- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Log Analytics the user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
+- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Azure Monitor logs, the user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
+
+# [**VNet flow logs (preview)**](#tab/vnet)
+
+- All flow logs between `FlowIntervalStartTime` and `FlowIntervalEndTime` are captured at one-minute intervals as blobs in a storage account.
+- Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes.
+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation; see the sketch after this list).
+- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. This process can take up to 1 hour.
+- `FlowStartTime` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`.
+- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen, but in Azure Monitor logs, the user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.
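The following Python sketch, using hypothetical flow records, illustrates the aggregation key: two raw flows that differ only by source port collapse into one decorated record.

```python
from collections import defaultdict

# Hypothetical raw flows; source port is deliberately left out of the grouping key.
raw_flows = [
    {"src": "10.0.0.4", "dst": "10.0.1.5", "src_port": 50001, "dst_port": 443, "proto": "T"},
    {"src": "10.0.0.4", "dst": "10.0.1.5", "src_port": 50002, "dst_port": 443, "proto": "T"},
]

aggregated = defaultdict(int)
for flow in raw_flows:
    key = (flow["src"], flow["dst"], flow["dst_port"], flow["proto"])
    aggregated[key] += 1

print(dict(aggregated))  # {('10.0.0.4', '10.0.1.5', 443, 'T'): 2}
```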
++ The following query helps you look at all subnets interacting with non-Azure public IPs in the last 30 days.
TableWithBlobId
The previous query constructs a URL to access the blob directly. The URL with placeholders is as follows: ```
-https://{saName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+https://{storageAccountName}@insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{networkSecurityGroupName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
``` ## Traffic analytics schema
+Traffic analytics is built on top of Azure Monitor logs, so you can run custom queries on data decorated by traffic analytics and set alerts.
+
+The following table lists the fields in the schema and what they signify.
+
+# [**NSG flow logs**](#tab/nsg)
+
+| Field | Format | Comments |
+| -- | -- | -- |
+| **TableName** | AzureNetworkAnalytics_CL | Table for traffic analytics data. |
+| **SubType_s** | FlowLog | Subtype for the flow logs. Use only **FlowLog**; other values of **SubType_s** are for internal use. |
+| **FASchemaVersion_s** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **TimeProcessed_t** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
+| **FlowIntervalStartTime_t** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime_t** | Date and time in UTC | Ending time of the flow log processing interval. |
+| **FlowStartTime_t** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. This flow gets aggregated based on aggregation logic. |
+| **FlowEndTime_t** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). |
+| **FlowType_s** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. |
+| **SrcIP_s** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **DestIP_s** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **VMIP_s** | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
+| **DestPort_d** | Destination Port | Port at which traffic is incoming. |
+| **L4Protocol_s** | - T <br> - U | Transport Protocol. T = TCP <br> U = UDP. |
+| **L7Protocol_s** | Protocol Name | Derived from destination port. |
+| **FlowDirection_s** | - I = Inbound <br> - O = Outbound | Direction of the flow: in or out of network security group per flow log. |
+| **FlowStatus_s** | - A = Allowed <br> - D = Denied | Status of flow whether allowed or denied by the network security group per flow log. |
+| **NSGList_s** | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network security group associated with the flow. |
+| **NSGRules_s** | \<Index value 0>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | Network security group rule that allowed or denied this flow. |
+| **NSGRule_s** | NSG_RULENAME | Network security group rule that allowed or denied this flow. |
+| **NSGRuleType_s** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
+| **MACAddress_s** | MAC Address | MAC address of the NIC at which the flow was captured. |
+| **Subscription_s** | Subscription of the Azure virtual network / network interface / virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| **Subscription1_s** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **Subscription2_s** | Subscription ID | Subscription ID of virtual network/ network interface / virtual machine that the destination IP in the flow belongs to. |
+| **Region_s** | Azure region of virtual network / network interface / virtual machine that the IP in the flow belongs to. | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| **Region1_s** | Azure Region | Azure region of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **Region2_s** | Azure Region | Azure region of virtual network that the destination IP in the flow belongs to. |
+| **NIC_s** | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic. |
+| **NIC1_s** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
+| **NIC2_s** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
+| **VM_s** | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s. |
+| **VM1_s** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow. |
+| **VM2_s** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow. |
+| **Subnet_s** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the NIC_s. |
+| **Subnet1_s** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow. |
+| **Subnet2_s** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow. |
+| **ApplicationGateway1_s** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow. |
+| **ApplicationGateway2_s** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow. |
+| **LoadBalancer1_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow. |
+| **LoadBalancer2_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow. |
+| **LocalNetworkGateway1_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow. |
+| **LocalNetworkGateway2_s** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow. |
+| **ConnectionType_s** | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type. |
+| **ConnectionName_s** | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, it is formatted as \<gateway name\>_\<VPN Client IP\>. |
+| **ConnectingVNets_s** | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks are populated here. |
+| **Country_s** | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field share the same country code. |
+| **AzureRegion_s** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field share the Azure region. |
+| **AllowedInFlows_d** | | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
+| **DeniedInFlows_d** | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
+| **AllowedOutFlows_d** | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
+| **DeniedOutFlows_d** | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
+| **FlowCount_d** | - | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. |
+| **InboundPackets_d** | Represents packets sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **OutboundPackets_d** | Represents packets sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **InboundBytes_d** | Represents bytes sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **OutboundBytes_d** | Represents bytes sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. |
+| **CompletedFlows_d**| | Populated with nonzero value only for Version 2 of NSG flow log schema. |
+| **PublicIPs_s** | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **SrcPublicIPs_s** | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **DestPublicIPs_s** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+ > [!IMPORTANT]
-> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing need to parse the `FlowDirection` field so that queries are simpler. These are changes in the updated schema:
+> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing the need to parse the `FlowDirection` field so that queries are simpler. The updated schema had the following changes:
> > - `FASchemaVersion_s` updated from 1 to 2. > - Deprecated fields: `VMIP_s`, `Subscription_s`, `Region_s`, `NSGRules_s`, `Subnet_s`, `VM_s`, `NIC_s`, `PublicIPs_s`, `FlowCount_d` > - New fields: `SrcPublicIPs_s`, `DestPublicIPs_s`, `NSGRule_s`
->
-> Deprecated fields are available until November 2022.
->
-
-Traffic analytics is built on top of Log Analytics, so you can run custom queries on data decorated by traffic analytics and set alerts on the same.
-The following table lists the fields in the schema and what they signify.
+# [**VNet flow logs (preview)**](#tab/vnet)
| Field | Format | Comments |
| -- | -- | -- |
-| TableName | AzureNetworkAnalytics_CL | Table for traffic analytics data. |
-| SubType_s | FlowLog | Subtype for the flow logs. Use only "FlowLog", other values of SubType_s are for internal workings of the product. |
-| FASchemaVersion_s | 2 | Schema version. Doesn't reflect NSG flow log version. |
-| TimeProcessed_t | Date and Time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
-| FlowIntervalStartTime_t | Date and Time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
-| FlowIntervalEndTime_t | Date and Time in UTC | Ending time of the flow log processing interval. |
-| FlowStartTime_t | Date and Time in UTC | First occurrence of the flow (which will get aggregated) in the flow log processing interval between "FlowIntervalStartTime_t" and "FlowIntervalEndTime_t". This flow gets aggregated based on aggregation logic. |
-| FlowEndTime_t | Date and Time in UTC | Last occurrence of the flow (which will get aggregated) in the flow log processing interval between "FlowIntervalStartTime_t" and "FlowIntervalEndTime_t". In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as "B" in the raw flow record). |
-| FlowType_s | * IntraVNet <br> * InterVNet <br> * S2S <br> * P2S <br> * AzurePublic <br> * ExternalPublic <br> * MaliciousFlow <br> * Unknown Private <br> * Unknown | Definition in notes below the table. |
-| SrcIP_s | Source IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
-| DestIP_s | Destination IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
-| VMIP_s | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
-| DestPort_d | Destination Port | Port at which traffic is incoming. |
-| L4Protocol_s | * T <br> * U | Transport Protocol. T = TCP <br> U = UDP. |
-| L7Protocol_s | Protocol Name | Derived from destination port. |
-| FlowDirection_s | * I = Inbound<br> * O = Outbound | Direction of the flow in/out of NSG as per flow log. |
-| FlowStatus_s | * A = Allowed by NSG Rule <br> * D = Denied by NSG Rule | Status of flow allowed/blocked by NSG as per flow log. |
-| NSGList_s | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network Security Group (NSG) associated with the flow. |
-| NSGRules_s | \<Index value 0)>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | NSG rule that allowed or denied this flow. |
-| NSGRule_s | NSG_RULENAME | NSG rule that allowed or denied this flow. |
-| NSGRuleType_s | * User Defined * Default | The type of NSG Rule used by the flow. |
-| MACAddress_s | MAC Address | MAC address of the NIC at which the flow was captured. |
-| Subscription_s | Subscription of the Azure virtual network/ network interface/ virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
-| Subscription1_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
-| Subscription2_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the destination IP in the flow belongs to. |
-| Region_s | Azure region of virtual network/ network interface/ virtual machine to which the IP in the flow belongs to | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
-| Region1_s | Azure Region | Azure region of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
-| Region2_s | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs to. |
-| NIC_s | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic. |
-| NIC1_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
-| NIC2_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
-| VM_s | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s. |
-| VM1_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow. |
-| VM2_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow. |
-| Subnet_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the NIC_s. |
-| Subnet1_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow. |
-| Subnet2_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow. |
-| ApplicationGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow. |
-| ApplicationGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow. |
-| LoadBalancer1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow. |
-| LoadBalancer2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow. |
-| LocalNetworkGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow. |
-| LocalNetworkGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow. |
-| ConnectionType_s | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type. |
-| ConnectionName_s | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, it will be formatted as \<gateway name\>_\<VPN Client IP\>. |
-| ConnectingVNets_s | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks will be populated here. |
-| Country_s | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field will share the same country code. |
-| AzureRegion_s | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field will share the Azure region. |
-| AllowedInFlows_d | | Count of inbound flows that were allowed. This represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
-| DeniedInFlows_d | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
-| AllowedOutFlows_d | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
-| DeniedOutFlows_d | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
-| FlowCount_d | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. |
-| InboundPackets_d | Represents packets sent from the destination to the source of the flow | This field is only populated for Version 2 of NSG flow log schema. |
-| OutboundPackets_d | Represents packets sent from the source to the destination of the flow | This field is only populated for Version 2 of NSG flow log schema. |
-| InboundBytes_d | Represents bytes sent from the destination to the source of the flow | This field is only populated Version 2 of NSG flow log schema. |
-| OutboundBytes_d | Represents bytes sent from the source to the destination of the flow | This field is only populated Version 2 of NSG flow log schema. |
-| CompletedFlows_d | | This field is only populated with nonzero value for Version 2 of NSG flow log schema. |
-| PublicIPs_s | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
-| SrcPublicIPs_s | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
-| DestPublicIPs_s | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **TableName** | NTANetAnalytics | Table for traffic analytics data. |
+| **SubType** | FlowLog | Subtype for the flow logs. Use only **FlowLog**, other values of **SubType** are for internal use. |
+| **FASchemaVersion** | 3 | Schema version. Doesn't reflect NSG flow log version. |
+| **TimeProcessed** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
+| **FlowIntervalStartTime** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime**| Date and time in UTC | Ending time of the flow log processing interval. |
+| **FlowStartTime** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. This flow gets aggregated based on aggregation logic. |
+| **FlowEndTime** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). |
+| **FlowType** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. |
+| **SrcIP** | Source IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **DestIP** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. |
+| **TargetResourceId** | ResourceGroupName/ResourceName | The ID of the resource at which flow logging and traffic analytics is enabled. |
+| **TargetResourceType** | VirtualNetwork/Subnet/NetworkInterface | Type of resource at which flow logging and traffic analytics is enabled (virtual network, subnet, NIC or network security group).|
+| **FlowLogResourceId** | ResourceGroupName/NetworkWatcherName/FlowLogName | The resource ID of the flow log. |
+| **DestPort** | Destination Port | Port at which traffic is incoming. |
+| **L4Protocol** | - T <br> - U | Transport Protocol. **T** = TCP <br> **U** = UDP |
+| **L7Protocol** | Protocol Name | Derived from destination port. |
+| **FlowDirection** | - **I** = Inbound <br> - **O** = Outbound | Direction of the flow: in or out of the network security group per flow log. |
+| **FlowStatus** | - **A** = Allowed <br> - **D** = Denied | Status of flow: allowed or denied by network security group per flow log. |
+| **NSGList** |\<SUBSCRIPTIONID>/<RESOURCEGROUP_NAME>/<NSG_NAME> | Network security group associated with the flow. |
+| **NSGRule** | NSG_RULENAME | Network security group rule that allowed or denied the flow. |
+| **NSGRuleType** | - User Defined <br> - Default | The type of network security group rule used by the flow. |
+| **MACAddress** | MAC Address | MAC address of the NIC at which the flow was captured. |
+| **SrcSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **DestSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the destination IP in the flow belongs to. |
+| **SrcRegion** | Azure Region | Azure region of the virtual network / network interface / virtual machine that the source IP in the flow belongs to. |
+| **DestRegion** | Azure Region | Azure region of the virtual network that the destination IP in the flow belongs to. |
+| **SrcNIC** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
+| **DestNIC** | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
+| **SrcVM** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual machine associated with the source IP in the flow. |
+| **DestVM** | <resourcegroup_Name>/\<VirtualMachineName> | Virtual machine associated with the destination IP in the flow. |
+| **SrcSubnet** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the source IP in the flow. |
+| **DestSubnet** | <ResourceGroup_Name>/<VirtualNetwork_Name>/\<SubnetName> | Subnet associated with the destination IP in the flow. |
+| **SrcApplicationGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the source IP in the flow. |
+| **DestApplicationGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the destination IP in the flow. |
+| **SrcLoadBalancer** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the source IP in the flow. |
+| **DestLoadBalancer** | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the destination IP in the flow. |
+| **SrcLocalNetworkGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the source IP in the flow. |
+| **DestLocalNetworkGateway** | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the destination IP in the flow. |
+| **ConnectionType** | Possible values are VNetPeering, VpnGateway, and ExpressRoute | The connection type. |
+| **ConnectionName** | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | The connection name. For flow type P2S, it's formatted as \<GatewayName>_\<VPNClientIP> |
+| **ConnectingVNets** | Space separated list of virtual network names. | In hub and spoke topology, hub virtual networks are populated here. |
+| **Country** | Two-letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs field share the same country code. |
+| **AzureRegion** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs field share the Azure region. |
+| **AllowedInFlows**| - | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
+| **DeniedInFlows** | - | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
+| **AllowedOutFlows** | - | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
+| **DeniedOutFlows** | - | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
+| **FlowCount** | - | Deprecated. Total flows that matched the same four-tuple. In flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. |
+| **PacketsDestToSrc** | Represents packets sent from the destination to the source of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **PacketsSrcToDest** | Represents packets sent from the source to the destination of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **BytesDestToSrc** | Represents bytes sent from the destination to the source of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **BytesSrcToDest** | Represents bytes sent from the source to the destination of the flow | Populated only for Version 2 of the NSG flow log schema. |
+| **CompletedFlows** | - | Populated with a nonzero value only for Version 2 of the NSG flow log schema. |
+| **SrcPublicIPs** | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **DestPublicIPs** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| **FlowEncryption** | - Encrypted <br>- Unencrypted <br>- Unsupported hardware <br>- Software not ready <br>- Drop due to no encryption <br>- Discovery not supported <br>- Destination on same host <br>- Fall back to no encryption. | Encryption level of flows. |
+
+> [!NOTE]
+> *NTANetAnalytics* in VNet flow logs replaces *AzureNetworkAnalytics_CL* used in NSG flow logs.
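Because these tables live in a Log Analytics workspace, the decorated records can also be queried programmatically. Here's a minimal sketch using the `azure-monitor-query` package against the NSG flow logs table; the workspace ID is a placeholder and the query is illustrative.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Illustrative query: top NSG rules by allowed inbound flow count over the last 30 days.
query = """
AzureNetworkAnalytics_CL
| where SubType_s == 'FlowLog' and FlowType_s == 'ExternalPublic'
| summarize TotalAllowedInFlows = sum(AllowedInFlows_d) by NSGRule_s
| top 10 by TotalAllowedInFlows desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(days=30),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```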
++
## Public IP details schema
Traffic analytics provides WHOIS data and geographic location for all public IPs
The following table describes the public IP schema:
+# [**NSG flow logs**](#tab/nsg)
+
+| Field | Format | Comments |
+| -- | -- | -- |
+| **TableName** | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. |
+| **SubType_s** | FlowLog | Subtype for the flow logs. Use only **FlowLog**; other values of **SubType_s** are for internal workings of the product. |
+| **FASchemaVersion_s** | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| **FlowIntervalStartTime_t** | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). |
+| **FlowIntervalEndTime_t** | Date and Time in UTC | End time of the flow log processing interval. |
+| **FlowType_s** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. |
+| **IP** | Public IP | Public IP whose information is provided in the record. |
+| **Location** | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). |
+| **PublicIPDetails** | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. |
+| **ThreatType** | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). |
+| **ThreatDescription** | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. |
+| **DNSDomain** | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. |
+
+# [**VNet flow logs (preview)**](#tab/vnet)
| Field | Format | Comments |
| -- | -- | -- |
-| TableName | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. |
-| SubType_s | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product. |
-| FASchemaVersion_s | 2 | Schema version. It doesn't reflect NSG flow log version. |
-| FlowIntervalStartTime_t | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). |
-| FlowIntervalEndTime_t | Date and Time in UTC | End time of the flow log processing interval. |
-| FlowType_s | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | Definition in notes below the table. |
-| IP | Public IP | Public IP whose information is provided in the record. |
-| Location | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). |
-| PublicIPDetails | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. |
-| ThreatType | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). |
-| ThreatDescription | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. |
-| DNSDomain | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. |
+| **TableName**| NTAIpDetails | Table that contains traffic analytics IP details data. |
+| **SubType**| FlowLog | Subtype for the flow logs. Use only **FlowLog**. Other values of SubType are for internal workings of the product. |
+| **FASchemaVersion** | 2 | Schema version. Doesn't reflect the NSG flow log version. |
+| **FlowIntervalStartTime**| Date and time in UTC | Start time of the flow log processing interval (the time from which flow interval is measured). |
+| **FlowIntervalEndTime**| Date and time in UTC | End time of the flow log processing interval. |
+| **FlowType** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. |
+| **IP**| Public IP | Public IP whose information is provided in the record. |
+| **PublicIPDetails** | Information about IP | **For AzurePublic IP**: Azure Service owning the IP or **Microsoft Virtual Public IP** for the IP 168.63.129.16. <br> **ExternalPublic/Malicious IP**: WhoIS information of the IP. |
+| **ThreatType** | Threat posed by malicious IP | *For Malicious IPs only*. One of the threats from the list of currently allowed values. For more information, see [Notes](#notes). |
+| **DNSDomain** | DNS domain | *For Malicious IPs only*. Domain name associated with this IP. |
+| **ThreatDescription** |Description of the threat | *For Malicious IPs only*. Description of the threat posed by the malicious IP. |
+| **Location** | Location of the IP | **For Azure Public IP**: Azure region of virtual network / network interface / virtual machine to which the IP belongs or Global for IP 168.63.129.16. <br> **For External Public IP and Malicious IP**: two-letter country code (ISO 3166-1 alpha-2) where IP is located. |
+
+> [!NOTE]
+> *NTAIpDetails* in VNet flow logs replaces *AzureNetworkAnalyticsIPDetails_CL* used in NSG flow logs.
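+
+Similarly, a hedged query sketch that lists malicious public IPs with their threat details from *NTAIpDetails* (placeholder workspace ID; assumes the table is populated):
+
+```azurepowershell-interactive
+# A sketch: list malicious IPs recorded by traffic analytics.
+$query = "NTAIpDetails | where FlowType == 'MaliciousFlow' | project IP, Location, ThreatType, ThreatDescription"
+(Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-id>' -Query $query).Results
+```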
++
List of threat types:
| Phishing | Indicators relating to a phishing campaign. |
| Proxy | Indicator of a proxy service. |
| PUA | Potentially Unwanted Application. |
-| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is or will require manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. |
+| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is, or when the indicator requires manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. |
## Notes

-- In case of `AzurePublic` and `ExternalPublic` flows, customer owned Azure virtual machine IP is populated in `VMIP_s` field, while the Public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to log analytics workspace is minimal. (This field will be deprecated soon and you should be using SrcIP_ and DestIP_s depending on whether the virtual machine was the source or the destination in the flow).
+- For `AzurePublic` and `ExternalPublic` flows, the customer-owned Azure virtual machine IP is populated in the `VMIP_s` field, while the public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, use `VMIP_s` and `PublicIPs_s` instead of the `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further so that the number of records ingested to the Log Analytics workspace is minimal. (This field will be deprecated soon; use `SrcIP_s` and `DestIP_s` depending on whether the virtual machine was the source or the destination in the flow.)
- Some field names are appended with `_s` or `_d`, which don't signify source and destination but indicate the data types *string* and *decimal* respectively.
- Based on the IP addresses involved in the flow, we categorize the flows into the following flow types:
  - `IntraVNet`: Both IP addresses in the flow reside in the same Azure virtual network.
## Next Steps

-- To learn more about traffic analytics, see [Azure Network Watcher Traffic analytics](traffic-analytics.md).
-- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to traffic analytics frequently asked questions.
--
+- To learn more about traffic analytics, see [Traffic analytics overview](traffic-analytics.md).
+- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to the most frequently asked questions about traffic analytics.
network-watcher Vnet Flow Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-cli.md
+
+ Title: Manage VNet flow logs - Azure CLI
+
+description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using the Azure CLI.
++++ Last updated : 08/16/2023+++
+# Create, change, enable, disable, or delete VNet flow logs using the Azure CLI
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure CLI. You can learn how to manage a VNet flow log using [PowerShell](vnet-flow-logs-powershell.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Insights provider. For more information, see [Register Insights provider](#register-insights-provider).
+
+- A virtual network. If you need to create a virtual network, see [Create a virtual network using the Azure CLI](../virtual-network/quick-create-cli.md).
+
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using the Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli).
+
+- Bash environment in [Azure Cloud Shell](https://shell.azure.com) or the Azure CLI installed locally. To learn more about using Bash in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - Bash](../cloud-shell/quickstart.md).
+
+ - If you choose to install and use Azure CLI locally, this article requires the Azure CLI version 2.39.0 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
+
+## Register insights provider
+
+The *Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure whether the *Microsoft.Insights* provider is registered, use [az provider register](/cli/azure/provider#az-provider-register) to register it.
+
+```azurecli-interactive
+# Register Microsoft.Insights provider.
+az provider register --namespace Microsoft.Insights
+```
+
+## Enable VNet flow logs
+
+Use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log.
+
+```azurecli-interactive
+# Create a VNet flow log.
+az network watcher flow-log create --location eastus --resource-group myResourceGroup --name myVNetFlowLog --vnet myVNet --storage-account myStorageAccount
+```
+
+## Enable VNet flow logs and traffic analytics
+
+Use [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create) to create a traffic analytics workspace, and then use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log that uses it.
+
+```azurecli-interactive
+# Create a traffic analytics workspace.
+az monitor log-analytics workspace create --name myWorkspace --resource-group myResourceGroup --location eastus
+
+# Create a VNet flow log.
+az network watcher flow-log create --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --workspace myWorkspace --interval 10 --traffic-analytics true
+```
+
+## List all flow logs in a region
+
+Use [az network watcher flow-log list](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-list) to list all flow log resources in a particular region in your subscription.
+
+```azurecli-interactive
+# Get all flow logs in East US region.
+az network watcher flow-log list --location eastus --out table
+```
+
+## View VNet flow log resource
+
+Use [az network watcher flow-log show](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-show) to see details of a flow log resource.
+
+```azurecli-interactive
+# Get the flow log details.
+az network watcher flow-log show --name myVNetFlowLog --resource-group NetworkWatcherRG --location eastus
+```
+
+## Download a flow log
+
+To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+VNet flow log files saved to a storage account follow the logging path shown in the following example:
+
+```
+https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+```
+
+## Disable traffic analytics on flow log resource
+
+To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to a storage account, use [az network watcher flow-log update](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-update).
+
+```azurecli-interactive
+# Update the VNet flow log.
+az network watcher flow-log update --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --traffic-analytics false
+```
+
+## Delete a VNet flow log resource
+
+To delete a VNet flow log resource, use [az network watcher flow-log delete](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete).
+
+```azurecli-interactive
+# Delete the VNet flow log.
+az network watcher flow-log delete --name myVNetFlowLog --location eastus
+```
+
+## Next steps
+
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
+
+ Title: VNet flow logs (preview)
+
+description: Learn about VNet flow logs feature of Azure Network Watcher.
++++ Last updated : 08/16/2023++
+# VNet flow logs (preview)
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network (VNet) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a virtual network. Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. Network Watcher VNet flow logs capability overcomes some of the existing limitations of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).
+
+## Why use flow logs?
+
+It's vital to monitor, manage, and know your network so that you can protect and optimize it. You may need to know the current state of the network, who's connecting, and where users are connecting from. You may also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen.
+
+Flow logs are the source of truth for all network activity in your cloud environment. Whether you're in a startup that's trying to optimize resources or a large enterprise that's trying to detect intrusion, flow logs can help. You can use them for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more.
+
+## Common use cases
+
+#### Network monitoring
+
+- Identify unknown or undesired traffic.
+- Monitor traffic levels and bandwidth consumption.
+- Filter flow logs by IP and port to understand application behavior.
+- Export flow logs to analytics and visualization tools of your choice to set up monitoring dashboards.
+
+#### Usage monitoring and optimization
+
+- Identify top talkers in your network.
+- Combine with GeoIP data to identify cross-region traffic.
+- Understand traffic growth for capacity forecasting.
+- Use data to remove overly restrictive traffic rules.
+
+#### Compliance
+
+- Use flow data to verify network isolation and compliance with enterprise access rules.
+
+#### Network forensics and security analysis
+
+- Analyze network flows from compromised IPs and network interfaces.
+- Export flow logs to any SIEM or IDS tool of your choice.
+
+## VNet flow logs compared to NSG flow logs
+
+Both VNet flow logs and [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) record IP traffic, but they differ in their behavior and capabilities. VNet flow logs simplify the scope of traffic monitoring by allowing you to enable logging at the [virtual network](../virtual-network/virtual-networks-overview.md) level, ensuring that traffic through all supported workloads within a virtual network is recorded. VNet flow logs also avoid the need for multi-level flow logging, such as with [NSG flow logs](network-watcher-nsg-flow-logging-overview.md#best-practices), where network security groups are configured at both the subnet and the NIC.
+
+In addition to identifying traffic allowed or denied by [network security group rules](../virtual-network/network-security-groups-overview.md), VNet flow logs identify traffic allowed or denied by [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). VNet flow logs also support evaluating the encryption status of your network traffic in scenarios where [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md) is enabled.
+
+## How logging works
+
+Key properties of VNet flow logs include:
+
+- Flow logs operate at Layer 4 of the Open Systems Interconnection (OSI) model and record all IP flows going through a virtual network.
+- Logs are collected at 1-minute intervals through the Azure platform and don't affect your Azure resources or network traffic.
+- Logs are written in the JSON (JavaScript Object Notation) format.
+- Each log record contains the network interface (NIC) the flow applies to, 5-tuple information, traffic direction, flow state, encryption state and throughput information.
+- All traffic flows in your network are evaluated through the rules in the applicable [network security group rules](../virtual-network/network-security-groups-overview.md) or [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). For more information, see [Log format](#log-format).
+
+## Log format
+
+VNet flow logs have the following properties:
+
+- `time`: Time in UTC when the event was logged.
+- `flowLogVersion`: Version of flow log schema.
+- `flowLogGUID`: The resource GUID of the FlowLog resource.
+- `macAddress`: MAC address of the network interface where the event was captured.
+- `category`: Category of the event. The category is always `FlowLogFlowEvent`.
+- `flowLogResourceID`: Resource ID of the FlowLog resource.
+- `targetResourceID`: Resource ID of target resource associated to the FlowLog resource.
+- `operationName`: Always `FlowLogFlowEvent`.
+- `flowRecords`: Collection of flow records.
+ - `flows`: Collection of flows. This property has multiple entries for different ACLs.
+ - `aclID`: Identifier of the resource evaluating traffic, either a network security group or Virtual Network Manager. For cases like traffic denied by encryption, this value is `unspecified`.
+ - `flowGroups`: Collection of flow records at a rule level.
+ - `rule`: Name of the rule that allowed or denied the traffic. For traffic denied due to encryption, this value is `unspecified`.
+ - `flowTuples`: String that contains multiple properties for the flow tuple in a comma-separated format (a parsing sketch follows this list):
+ - `Time Stamp`: Time stamp of when the flow occurred in UNIX epoch format.
+ - `Source IP`: Source IP address.
+ - `Destination IP`: Destination IP address.
+ - `Source port`: Source port.
+ - `Destination port`: Destination Port.
+ - `Protocol`: Layer 4 protocol of the flow expressed in IANA assigned values.
+ - `Flow direction`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound.
+ - `Flow state`: State of the flow. Possible states are:
+ - `B`: Begin, when a flow is created. No statistics are provided.
+ - `C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals.
+ - `E`: End, when a flow is terminated. Statistics are provided.
+ - `D`: Deny, when a flow is denied.
+ - `Flow encryption`: Encryption state of the flow. Possible values are:
+ - `X`: Encrypted.
+ - `NX`: Unencrypted.
+ - `NX_HW_NOT_SUPPORTED`: Unsupported hardware.
+ - `NX_SW_NOT_READY`: Software not ready.
+ - `NX_NOT_ACCEPTED`: Drop due to no encryption.
+ - `NX_NOT_SUPPORTED`: Discovery not supported.
+ - `NX_LOCAL_DST`: Destination on same host.
+ - `NX_FALLBACK`: Fall back to no encryption.
+ - `Packets sent`: Total number of packets sent from source to destination since the last update.
+ - `Bytes sent`: Total number of packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload.
+ - `Packets received`: Total number of packets sent from destination to source since the last update.
+ - `Bytes received`: Total number of packet bytes sent from destination to source since the last update. Packet bytes include packet header and payload.
+
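+To make the tuple format concrete, here's a small parsing sketch (not part of the product). It splits a sample tuple, like the ones shown later in this article, into named fields; interpreting the epoch timestamp as milliseconds is an assumption based on the sample values.
+
+```powershell
+# A sketch: split one VNet flow log tuple into named fields.
+$tuple = '1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580'
+$f = $tuple -split ','
+[pscustomobject]@{
+    TimeStamp       = [DateTimeOffset]::FromUnixTimeMilliseconds([long]$f[0])  # assumed milliseconds
+    SourceIP        = $f[1]
+    DestinationIP   = $f[2]
+    SourcePort      = [int]$f[3]
+    DestinationPort = [int]$f[4]
+    Protocol        = $f[5]   # IANA protocol number, 6 = TCP
+    Direction       = $f[6]   # I = inbound, O = outbound
+    FlowState       = $f[7]   # B, C, E, or D
+    Encryption      = $f[8]   # X, NX, or an NX_* reason
+    PacketsSent     = [long]$f[9]
+    BytesSent       = [long]$f[10]
+    PacketsReceived = [long]$f[11]
+    BytesReceived   = [long]$f[12]
+}
+```
+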
+Traffic in your virtual networks is Unencrypted (NX) by default. For encrypted traffic, enable [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md).
+
+`Flow encryption` has the following possible encryption statuses:
+
+| Encryption Status | Description |
+| -- | -- |
+| `X` | **Connection is encrypted**. Encryption is configured and the platform has encrypted the connection. |
+| `NX` | **Connection is Unencrypted**. This event is logged in two scenarios: <br> - When encryption isn't configured. <br> - When an encrypted virtual machine communicates with an endpoint that lacks encryption (such as an internet endpoint). |
+| `NX_HW_NOT_SUPPORTED` | **Unsupported hardware**. Encryption is configured, but the virtual machine is running on a host that doesn't support encryption. This issue usually occurs when the FPGA isn't attached to the host or is faulty. Report this issue to Microsoft for investigation. |
+| `NX_SW_NOT_READY` | **Software not ready**. Encryption is configured, but the software component (GFT) in the host networking stack isn't ready to process encrypted connections. This issue can happen when the virtual machine is booting for the first time, restarting, or being redeployed. It can also happen when the networking components on the host where the virtual machine is running are being updated. In all these scenarios, the packet is dropped. The issue should be temporary; encryption should start working once the virtual machine is fully up and running or the software update on the host is complete. If the issue persists, report it to Microsoft for investigation. |
+| `NX_NOT_ACCEPTED` | **Drop due to no encryption**. Encryption is configured on both source and destination endpoints with a drop-on-unencrypted policy. If traffic can't be encrypted, the packet is dropped. |
+| `NX_NOT_SUPPORTED` | **Discovery not supported**. Encryption is configured, but the encryption session wasn't established because discovery isn't supported in the host networking stack. In this case, the packet is dropped. If you encounter this issue, report it to Microsoft for investigation. |
+| `NX_LOCAL_DST` | **Destination on same host**. Encryption is configured, but the source and destination virtual machines are running on the same Azure host. In this case, the connection isn't encrypted by design. |
+| `NX_FALLBACK` | **Fall back to no encryption**. Encryption is configured with the allow-unencrypted policy for both source and destination endpoints. Encryption was attempted but ran into an issue, so the connection is allowed but isn't encrypted. For example, the virtual machine initially landed on a node that supports encryption, but that support was later disabled. |
++
+## Sample log record
+
+The following VNet flow logs example contains multiple records that follow the property list described earlier.
+
+```json
+{
+ "records": [
+ {
+ "time": "2022-09-14T09:00:52.5625085Z",
+ "flowLogVersion": 4,
+ "flowLogGUID": "abcdef01-2345-6789-0abc-def012345678",
+ "macAddress": "00224871C205",
+ "category": "FlowLogFlowEvent",
+ "flowLogResourceID": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG",
+ "targetResourceID": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet",
+ "operationName": "FlowLogFlowEvent",
+ "flowRecords": {
+ "flows": [
+ {
+ "aclID": "00000000-1234-abcd-ef00-c1c2c3c4c5c6",
+ "flowGroups": [
+ {
+ "rule": "DefaultRule_AllowInternetOutBound",
+ "flowTuples": [
+ "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0",
+ "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580",
+ "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0",
+ "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108",
+ "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0",
+ "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466"
+ ]
+ }
+ ]
+ },
+ {
+ "aclID": "01020304-abcd-ef00-1234-102030405060",
+ "flowGroups": [
+ {
+ "rule": "BlockHighRiskTCPPortsFromInternet",
+ "flowTuples": [
+ "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0",
+ "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0"
+ ]
+ },
+ {
+ "rule": "Internet",
+ "flowTuples": [
+ "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0",
+ "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0",
+ "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0",
+ "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0",
+ "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0",
+ "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0",
+ "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
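+If you save the sample record to a file, a short sketch like the following (an illustration, with a hypothetical file name) flattens it into one line per flow tuple:
+
+```powershell
+# A sketch: walk records -> flows -> flowGroups -> flowTuples in a downloaded PT1H.json.
+$log = Get-Content -Raw -Path ./PT1H.json | ConvertFrom-Json
+foreach ($record in $log.records) {
+    foreach ($flow in $record.flowRecords.flows) {
+        foreach ($group in $flow.flowGroups) {
+            foreach ($tuple in $group.flowTuples) {
+                '{0} {1} {2}' -f $record.time, $group.rule, $tuple
+            }
+        }
+    }
+}
+```
+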
+## Log tuple and bandwidth calculation
++
+Here's an example bandwidth calculation for flow tuples from a TCP conversation between **185.170.185.105:35370** and **10.2.0.4:23**:
+
+`1493763938,185.170.185.105,10.2.0.4,35370,23,6,I,B,NX,,,,`
+`1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880`
+`1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072`
+
+For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000.
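+
+The same arithmetic as a short sketch (illustration only), summing fields 9-12 of the `C` and `E` tuples above:
+
+```powershell
+# A sketch: aggregate packets and bytes across the continuation and end tuples.
+$tuples = @(
+    '1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880',
+    '1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072'
+)
+$packets = 0; $bytes = 0
+foreach ($t in $tuples) {
+    $f = $t -split ','
+    $packets += [long]$f[9] + [long]$f[11]   # packets sent + packets received
+    $bytes   += [long]$f[10] + [long]$f[12]  # bytes sent + bytes received
+}
+"$packets packets, $bytes bytes"             # 9125 packets, 5256000 bytes
+```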
+
+## Considerations for VNet flow logs
+
+### Storage account
+
+- **Location**: The storage account used must be in the same region as the virtual network.
+- **Performance tier**: Currently, only standard-tier storage accounts are supported.
+- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs.
+
+### Cost
+
+VNet flow logging is billed on the volume of logs produced. High traffic volume can result in a large flow-log volume and associated costs.
+
+Pricing of VNet flow logs doesn't include the underlying costs of storage. Using the retention policy feature with VNet flow logs means incurring separate storage costs for extended periods of time.
+
+If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
+
+## Pricing
+
+VNet flow logs aren't currently billed. In the future, VNet flow logs will be charged per gigabyte of "Network Logs Collected" and will come with a free tier of 5 GB/month per subscription. If traffic analytics is enabled with VNet flow logs, existing traffic analytics pricing applies. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+
+## Availability
+
+VNet flow logs is available in the following regions during the preview:
+
+- East US 2 EUAP
+- Central US EUAP
+- West Central US
+- East US
+- East US 2
+- West US
+- West US 2
+
+To sign up to obtain access to the public preview, see [VNet flow logs - public preview sign up](https://aka.ms/VNetflowlogspreviewsignup).
+
+## Next steps
+
+- To learn how to create, change, enable, disable, or delete VNet flow logs, see [PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md) VNet flow logs articles.
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md) and [Traffic analytics schema](traffic-analytics-schema.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
+++
network-watcher Vnet Flow Logs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md
+
+ Title: Manage VNet flow logs - PowerShell
+
+description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using Azure PowerShell.
++++ Last updated : 08/16/2023+++
+# Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md).
+
+In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using Azure PowerShell. You can learn how to manage a VNet flow log using the [Azure CLI](vnet-flow-logs-cli.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- Insights provider. For more information, see [Register Insights provider](#register-insights-provider).
+
+- A virtual network. If you need to create a virtual network, see [Create a virtual network using PowerShell](../virtual-network/quick-create-powershell.md).
+
+- An Azure storage account. If you need to create a storage account, see [Create a storage account using PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell).
+
+- PowerShell environment in [Azure Cloud Shell](https://shell.azure.com) or Azure PowerShell installed locally. To learn more about using PowerShell in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - PowerShell](../cloud-shell/quickstart-powershell.md).
+
+ - If you choose to install and use PowerShell locally, this article requires the Azure PowerShell version 7.4.0 or later. Run `Get-InstalledModule -Name Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). Run `Connect-AzAccount` to sign in to Azure.
+
+## Register insights provider
+
+The *Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure whether the *Microsoft.Insights* provider is registered, use [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) to register it.
+
+```azurepowershell-interactive
+# Register Microsoft.Insights provider.
+Register-AzResourceProvider -ProviderNamespace Microsoft.Insights
+```
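+
+Registration can take a few minutes. To check its state before proceeding, you can use [Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider):
+
+```azurepowershell-interactive
+# Check the registration state of the Microsoft.Insights provider.
+Get-AzResourceProvider -ProviderNamespace Microsoft.Insights |
+    Format-Table ProviderNamespace, RegistrationState
+```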
+
+## Enable VNet flow logs
+
+Use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log.
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Create a VNet flow log.
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Enable VNet flow logs and traffic analytics
+
+Use [New-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace) to create a traffic analytics workspace, and then use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log that uses it.
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Create a traffic analytics workspace and place its configuration into a variable.
+$workspace = New-AzOperationalInsightsWorkspace -Name myWorkspace -ResourceGroupName myResourceGroup -Location EastUS
+
+# Create a VNet flow log.
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $workspace.ResourceId -TrafficAnalyticsInterval 10
+```
+
+## List all flow logs in a region
+
+Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to list all flow log resources in a particular region in your subscription.
+
+```azurepowershell-interactive
+# Get all flow logs in East US region.
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG | Format-Table Name
+```
+
+## View VNet flow log resource
+
+Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to see details of a flow log resource.
+
+```azurepowershell-interactive
+# Get the flow log details.
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -Name myVNetFlowLog
+```
+
+## Download a flow log
+
+To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+
+VNet flow log files saved to a storage account follow the logging path shown in the following example:
+
+```
+https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
+```
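+
+Alternatively, here's a hedged sketch that uses the Az.Storage cmdlets to list the flow log blobs in the container (the account and resource group names follow the placeholders used earlier in this article):
+
+```azurepowershell-interactive
+# List VNet flow log blobs in the insights-logs-flowlogflowevent container.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+Get-AzStorageBlob -Container 'insights-logs-flowlogflowevent' -Context $storageAccount.Context |
+    Select-Object Name, LastModified
+```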
+
+## Disable traffic analytics on flow log resource
+
+To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to a storage account, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Update the VNet flow log.
+Set-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Disable VNet flow logging
+
+To disable a VNet flow log without deleting it so you can re-enable it later, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Place the virtual network configuration into a variable.
+$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup
+
+# Disable the VNet flow log.
+Set-AzNetworkWatcherFlowLog -Enabled $false -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id
+```
+
+## Delete a VNet flow log resource
+
+To delete a VNet flow log resource, use [Remove-AzNetworkWatcherFlowLog](/powershell/module/az.network/remove-aznetworkwatcherflowlog).
+
+```azurepowershell-interactive
+# Delete the VNet flow log.
+Remove-AzNetworkWatcherFlowLog -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG
+```
+
+## Next steps
+
+- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md).
+- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
orbital Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md
With Azure Orbital Ground Station, you can focus on your missions by off-loading
Azure Orbital Ground Station uses Microsoft's global infrastructure and low-latency global network along with an expansive partner ecosystem of ground station networks, cloud modems, and "Telemetry, Tracking, & Control" (TT&C) functions.

## Earth Observation with Azure Orbital Ground Station
For a full end-to-end solution to manage fleet operations and "Telemetry, Tracki
* Direct data ingestion into Azure
* Marketplace integration with third-party data processing and image calibration services
* Integrated cloud modems for X and S bands
- * Global reach through integrated third-party networks
+ * Global reach through first-party and integrated third-party networks
+
## Links to learn more

- [Overview, features, security, and FAQ](https://azure.microsoft.com/products/orbital/#layout-container-uid189e)
private-5g-core Commission Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md
Run the following commands at the PowerShell prompt, specifying the object ID yo
```powershell
Invoke-Command -Session $minishellSession -ScriptBlock {Set-HcsKubeClusterArcInfo -CustomLocationsObjectId *object ID*}
+
+Invoke-Command -Session $minishellSession -ScriptBlock {Enable-HcsAzureKubernetesService -f}
```

Once you've run this command, you should see an updated option in the local UI – **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image.
The Azure Private 5G Core private mobile network requires a custom location and
1. Create the Network Function Operator Kubernetes extension: ```azurecli
- Add-Content -Path $TEMP_FILE -Value @"
+ cat > $TEMP_FILE <<EOF
{ "helm.versions": "v3", "Microsoft.CustomLocation.ServiceAccount": "azurehybridnetwork-networkfunction-operator",
The Azure Private 5G Core private mobile network requires a custom location and
"helm.release-namespace": "azurehybridnetwork", "managed-by": "helm" }
- "@
+ EOF
```

```azurecli
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
## Get access to Azure Private 5G Core for your Azure subscription
-Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP).
+Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://forms.office.com/r/4Q1yNRakXe).
## Choose the core technology type (5G or 4G)
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md
Title: 'Use Azure Firewall to inspect traffic destined to a private endpoint'
+ Title: 'Azure Firewall scenarios to inspect traffic destined to a private endpoint'
-description: Learn how you can inspect traffic destined to a private endpoint using Azure Firewall.
+description: Learn about different scenarios to inspect traffic destined to a private endpoint using Azure Firewall.
- Previously updated : 04/27/2023+ Last updated : 08/14/2023
-# Use Azure Firewall to inspect traffic destined to a private endpoint
+# Azure Firewall scenarios to inspect traffic destined to a private endpoint
> [!NOTE]
> If you want to secure traffic to private endpoints in Azure Virtual WAN using secured virtual hub, see [Secure traffic destined to private endpoints in Azure Virtual WAN](../firewall-manager/private-link-inspection-secure-virtual-hub.md).
If your security requirements require client traffic to services exposed via pri
The same considerations as in scenario 2 above apply. In this scenario, there aren't virtual network peering charges. For more information about how to configure your DNS servers to allow on-premises workloads to access private endpoints, see [on-premises workloads using a DNS forwarder](./private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
-## Prerequisites
-
-* An Azure subscription.
-
-* A Log Analytics workspace.
-
-See, [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md) to create a workspace if you don't have one in your subscription.
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a VM
-
-In this section, you create a virtual network and subnet to host the VM used to access your private link resource. An Azure SQL database is used as the example service.
-
-## Virtual networks and parameters
-
-Create three virtual networks and their corresponding subnets to:
-
-* Contain the Azure Firewall used to restrict communication between the VM and the private endpoint.
-
-* Host the VM that is used to access your private link resource.
-
-* Host the private endpoint.
-
-Replace the following parameters in the steps with the following information:
-
-### Azure Firewall network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myAzFwVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.0.0.0/16 |
-| **\<subnet-name>** | AzureFirewallSubnet |
-| **\<subnet-address-range>** | 10.0.0.0/24 |
-
-### Virtual machine network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myVMVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.1.0.0/16 |
-| **\<subnet-name>** | VMSubnet |
-| **\<subnet-address-range>** | 10.1.0.0/24 |
-
-### Private endpoint network
-
-| Parameter | Value |
-|--|-|
-| **\<resource-group-name>** | myResourceGroup |
-| **\<virtual-network-name>** | myPEVNet |
-| **\<region-name>** | South Central US |
-| **\<IPv4-address-space>** | 10.2.0.0/16 |
-| **\<subnet-name>** | PrivateEndpointSubnet |
-| **\<subnet-address-range>** | 10.2.0.0/24 |
--
-10. Repeat steps 1 to 9 to create the virtual networks for hosting the virtual machine and private endpoint resources.
-
-### Create virtual machine
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this resource group in the previous section. |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select **(US) South Central US**. |
- | Availability options | Leave the default **No infrastructure redundancy required**. |
- | Image | Select **Ubuntu Server 18.04 LTS - Gen1**. |
- | Size | Select **Standard_B2s**. |
- | **Administrator account** | |
- | Authentication type | Select **Password**. |
- | Username | Enter a username of your choosing. |
- | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/linux/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).|
- | Confirm Password | Reenter password. |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
--
-3. Select **Next: Disks**.
-
-4. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**.
-
-5. In **Create a virtual machine - Networking**, select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Select **myVMVNet**. |
- | Subnet | Select **VMSubnet (10.1.0.0/24)**.|
- | Public IP | Leave the default **(new) myVm-ip**. |
- | Public inbound ports | Select **Allow selected ports**. |
- | Select inbound ports | Select **SSH**.|
- ||
-
-6. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-7. When you see the **Validation passed** message, select **Create**.
--
-## Deploy the Firewall
-
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-2. Type **firewall** in the search box and press **Enter**.
-
-3. Select **Firewall** and then select **Create**.
-
-4. On the **Create a Firewall** page, use the following table to configure the firewall:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myAzureFirewall**. |
- | Region | Select **South Central US**. |
- | Availability zone | Leave the default **None**. |
- | Choose a virtual network | Select **Use Existing**. |
- | Virtual network | Select **myAzFwVNet**. |
- | Public IP address | Select **Add new** and in Name enter **myFirewall-ip**. |
- | Forced tunneling | Leave the default **Disabled**. |
- |||
-5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-6. When you see the **Validation passed** message, select **Create**.
-
-## Enable firewall logs
-
-In this section, you enable the logs on the firewall.
-
-1. In the Azure portal, select **All resources** in the left-hand menu.
-
-2. Select the firewall **myAzureFirewall** in the list of resources.
-
-3. Under **Monitoring** in the firewall settings, select **Diagnostic settings**
-
-4. Select **+ Add diagnostic setting** in the Diagnostic settings.
-
-5. In **Diagnostics setting**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Diagnostic setting name | Enter **myDiagSetting**. |
- | Category details | |
- | log | Select **AzureFirewallApplicationRule** and **AzureFirewallNetworkRule**. |
- | Destination details | Select **Send to Log Analytics**. |
- | Subscription | Select your subscription. |
- | Log Analytics workspace | Select your Log Analytics workspace. |
-
-6. Select **Save**.
-
-## Create Azure SQL database
-
-In this section, you create a private SQL Database.
-
-1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **SQL Database**.
-
-2. In **Create SQL Database - Basics**, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this resource group in the previous section.|
- | **Database details** | |
- | Database name | Enter **mydatabase**. |
- | Server | Select **Create new** and enter the following information. |
- | Server name | Enter **mydbserver**. If this name is taken, enter a unique name. |
- | Server admin sign in | Enter a name of your choosing. |
- | Password | Enter a password of your choosing. |
- | Confirm Password | Reenter password |
- | Location | Select **(US) South Central US**. |
- | Want to use SQL elastic pool | Leave the default **No**. |
- | Compute + storage | Leave the default **General Purpose Gen5, 2 vCores, 32 GB Storage**. |
- |||
-
-3. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-4. When you see the **Validation passed** message, select **Create**.
-
-## Create private endpoint
-
-In this section, you create a private endpoint for the Azure SQL database in the previous section.
-
-1. In the Azure portal, select **All resources** in the left-hand menu.
-
-2. Select the Azure SQL server **mydbserver** in the list of services. If you used a different server name, choose that name.
-
-3. In the server settings, select **Private endpoint connections** under **Security**.
-
-4. Select **+ Private endpoint**.
-
-5. In **Create a private endpoint**, enter or select this information in the **Basics** tab:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Region | Select **(US) South Central US.** |
-
-6. Select the **Resource** tab or select **Next: Resource** at the bottom of the page.
-
-7. In the **Resource** tab, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Connection method | Select **Connect to an Azure resource in my directory**. |
- | Subscription | Select your subscription. |
- | Resource type | Select **Microsoft.Sql/servers**. |
- | Resource | Select **mydbserver** or the name of the server you created in the previous step.
- | Target subresource | Select **sqlServer**. |
-
-8. Select the **Configuration** tab or select **Next: Configuration** at the bottom of the page.
-
-9. In the **Configuration** tab, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | **Networking** | |
- | Virtual network | Select **myPEVnet**. |
- | Subnet | Select **PrivateEndpointSubnet**. |
- | **Private DNS integration** | |
- | Integrate with private DNS zone | Select **Yes**. |
- | Subscription | Select your subscription. |
- | Private DNS zones | Leave the default **privatelink.database.windows.net**. |
-
-10. Select the **Review + create** tab or select **Review + create** at the bottom of the page.
-
-11. Select **Create**.
-
-12. After the endpoint is created, select **Firewalls and virtual networks** under **Security**.
-
-13. In **Firewalls and virtual networks**, select **Yes** next to **Allow Azure services and resources to access this server**.
-
-14. Select **Save**.
-
-## Connect the virtual networks using virtual network peering
-
-In this section, we connect virtual networks **myVMVNet** and **myPEVNet** to **myAzFwVNet** using peering. There isn't direct connectivity between **myVMVNet** and **myPEVNet**.
-
-1. In the portal's search bar, enter **myAzFwVNet**.
-
-2. Select **Peerings** under **Settings** menu and select **+ Add**.
-
-3. In **Add Peering** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myVMVNet**. |
- | **Peer details** | |
- | Virtual network deployment model | Leave the default **Resource Manager**. |
- | I know my resource ID | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myVMVNet**. |
- | Name of the peering from remote virtual network to myAzFwVNet | Enter **myVMVNet-to-myAzFwVNet**. |
- | **Configuration** | |
- | **Configure virtual network access settings** | |
- | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. |
- | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. |
- | **Configure forwarded traffic settings** | |
- | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. |
- | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. |
- | **Configure gateway transit settings** | |
- | Allow gateway transit | Leave unchecked |
-
-4. Select **OK**.
-
-5. Select **+ Add**.
-
-6. In **Add Peering** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myPEVNet**. |
- | **Peer details** | |
- | Virtual network deployment model | Leave the default **Resource Manager**. |
- | I know my resource ID | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myPEVNet**. |
- | Name of the peering from remote virtual network to myAzFwVNet | Enter **myPEVNet-to-myAzFwVNet**. |
- | **Configuration** | |
- | **Configure virtual network access settings** | |
- | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. |
- | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. |
- | **Configure forwarded traffic settings** | |
- | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. |
- | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. |
- | **Configure gateway transit settings** | |
- | Allow gateway transit | Leave unchecked |
-
-7. Select **OK**.
-
-## Link the virtual networks to the private DNS zone
-
-In this section, we link virtual networks **myVMVNet** and **myAzFwVNet** to the **privatelink.database.windows.net** private DNS zone. This zone was created when we created the private endpoint.
-
-The link is required for the VM and firewall to resolve the FQDN of the database to its private endpoint address. Virtual network **myPEVNet** was automatically linked when the private endpoint was created.
-
->[!NOTE]
->If you don't link the VM and firewall virtual networks to the private DNS zone, both the VM and firewall will still be able to resolve the SQL Server FQDN. They will resolve to its public IP address.
-
-1. In the portal's search bar, enter **privatelink.database**.
-
-2. Select **privatelink.database.windows.net** in the search results.
-
-3. Select **Virtual network links** under **Settings**.
-
-4. Select **+ Add**.
-
-5. In **Add virtual network link** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Link name | Enter **Link-to-myVMVNet**. |
- | **Virtual network details** | |
- | I know the resource ID of virtual network | Leave unchecked. |
- | Subscription | Select your subscription. |
- | Virtual network | Select **myVMVNet**. |
- | **CONFIGURATION** | |
- | Enable auto registration | Leave unchecked. |
-
-6. Select **OK**.
-
-## Configure an application rule with SQL FQDN in Azure Firewall
-
-In this section, you configure an application rule to allow communication between **myVM** and the private endpoint for SQL Server **mydbserver.database.windows.net**.
-
-This rule allows communication through the firewall that we created in the previous steps.
-
-1. In the portal's search bar, enter **myAzureFirewall**.
-
-2. Select **myAzureFirewall** in the search results.
-
-3. Select **Rules** under **Settings** in the **myAzureFirewall** overview.
-
-4. Select the **Application rule collection** tab.
-
-5. Select **+ Add application rule collection**.
-
-6. In **Add application rule collection** enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Priority | Enter **100**. |
- | Action | Enter **Allow**. |
- | **Rules** | |
- | **FQDN tags** | |
- | Name | Leave blank. |
- | Source type | Leave the default **IP address**. |
- | Source | Leave blank. |
- | FQDN tags | Leave the default **0 selected**. |
- | **Target FQDNs** | |
- | Name | Enter **SQLPrivateEndpoint**. |
- | Source type | Leave the default **IP address**. |
- | Source | Enter **10.1.0.0/16**. |
- | Protocol: Port | Enter **mssql:1433**. |
- | Target FQDNs | Enter **mydbserver.database.windows.net**. |
-
-7. Select **Add**.
-
-## Route traffic between the virtual machine and private endpoint through Azure Firewall
-
-We didn't create a virtual network peering directly between virtual networks **myVMVNet** and **myPEVNet**. The virtual machine **myVM** doesn't have a route to the private endpoint we created.
-
-In this section, we create a route table with a custom route.
-
-The route sends traffic from the **myVM** subnet to the address space of virtual network **myPEVNet**, through the Azure Firewall.
-
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-
-2. Type **route table** in the search box and press **Enter**.
-
-3. Select **Route table** and then select **Create**.
-
-4. On the **Create Route table** page, use the following table to configure the route table:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Region | Select **South Central US**. |
- | Name | Enter **VMsubnet-to-AzureFirewall**. |
- | Propagate gateway routes | Select **No**. |
-
-5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-
-6. When you see the **Validation passed** message, select **Create**.
-
-7. Once the deployment completes, select **Go to resource**.
-
-8. Select **Routes** under **Settings**.
-
-9. Select **+ Add**.
-
-10. On the **Add route** page, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Route name | Enter **myVMsubnet-to-privateendpoint**. |
- | Address prefix | Enter **10.2.0.0/16**. |
- | Next hop type | Select **Virtual appliance**. |
- | Next hop address | Enter **10.0.0.4**. |
-
-11. Select **OK**.
-
-12. Select **Subnets** under **Settings**.
-
-13. Select **+ Associate**.
-
-14. On the **Associate subnet** page, enter or select this information:
-
- | Setting | Value |
- | - | -- |
- | Virtual network | Select **myVMVNet**. |
- | Subnet | Select **VMSubnet**. |
-
-15. Select **OK**.
-
-## Connect to the virtual machine from your client computer
-
-Connect to the VM **myVm** from the internet as follows:
-
-1. In the portal's search bar, enter **myVm-ip**.
-
-2. Select **myVM-ip** in the search results.
-
-3. Copy or write down the value under **IP address**.
-
-4. If you're using Windows 10, run the following command using PowerShell. For other Windows client versions, use an SSH client like [Putty](https://www.putty.org/):
-
-* Replace **username** with the admin username you entered during VM creation.
-
-* Replace **IPaddress** with the IP address from the previous step.
-
- ```bash
- ssh username@IPaddress
- ```
-
-5. Enter the password you defined when creating **myVm**.
-
-## Access SQL Server privately from the virtual machine
-
-In this section, you connect privately to the SQL Database using the private endpoint.
-
-1. Enter `nslookup mydbserver.database.windows.net`
-
- You receive a message similar to the following output:
-
- ```output
- Server: 127.0.0.53
- Address: 127.0.0.53#53
-
- Non-authoritative answer:
- mydbserver.database.windows.net canonical name = mydbserver.privatelink.database.windows.net.
- Name: mydbserver.privatelink.database.windows.net
- Address: 10.2.0.4
- ```
-
-2. Install [SQL Server command-line tools](/sql/linux/quickstart-install-connect-ubuntu#tools).
-
-3. Run the following command to connect to the SQL Server. Use the server admin and password you defined when you created the SQL Server in the previous steps.
-
-* Replace **\<ServerAdmin>** with the admin username you entered during the SQL server creation.
-
-* Replace **\<YourPassword>** with the admin password you entered during SQL server creation.
-
- ```bash
- sqlcmd -S mydbserver.database.windows.net -U '<ServerAdmin>' -P '<YourPassword>'
- ```
-4. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool.
-
-5. Close the connection to **myVM** by entering **exit**.
-
-## Validate the traffic in Azure Firewall logs
-
-1. In the Azure portal, select **All Resources** and select your Log Analytics workspace.
-
-2. Select **Logs** under **General** in the Log Analytics workspace page.
-
-3. Select the blue **Get Started** button.
-
-4. In the **Example queries** window, select **Firewalls** under **All Queries**.
-
-5. Select the **Run** button under **Application rule log data**.
-
-6. In the log query output, verify **mydbserver.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **RuleCollection**.
-
-## Clean up resources
-
-When you're done using the resources, delete the resource group and all of the resources it contains:
-
-1. Enter **myResourceGroup** in the **Search** box at the top of the portal and select **myResourceGroup** from the search results.
-
-1. Select **Delete resource group**.
-
-1. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
- ## Next steps
-In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall.
+In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall.
-You connected to the VM and securely communicated to the database through Azure Firewall using private link.
+For a tutorial on how to configure Azure Firewall to inspect traffic destined to a private endpoint, see [Tutorial: Inspect private endpoint traffic with Azure Firewall](tutorial-inspect-traffic-azure-firewall.md).
To learn more about private endpoint, see [What is Azure Private Endpoint?](private-endpoint-overview.md).
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure Data Factory | Microsoft.DataFactory/factories | dataFactory | | Azure Data Explorer | Microsoft.Kusto/clusters | cluster | | Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer |
-| Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer |
+| Azure Database for MySQL - Single Server | Microsoft.DBforMySQL/servers | mysqlServer |
+| Azure Database for MySQL - Flexible Server | Microsoft.DBforMySQL/flexibleServers | mysqlServer |
| Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer | | Azure Device Provisioning Service | Microsoft.Devices/provisioningServices | iotDps | | Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub |
private-link Tutorial Inspect Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-inspect-traffic-azure-firewall.md
+
+ Title: 'Tutorial: Inspect private endpoint traffic with Azure Firewall'
+description: Learn how to inspect private endpoint traffic with Azure Firewall.
+++++ Last updated : 08/15/2023++
+# Tutorial: Inspect private endpoint traffic with Azure Firewall
+
+Azure Private Endpoint is the fundamental building block for Azure Private Link. Private endpoints enable Azure resources deployed in a virtual network to communicate privately with private link resources.
+
+Private endpoints allow resources access to the private link service deployed in a virtual network. Access to the private endpoint through virtual network peering and on-premises network connections extends the connectivity.
+
+You may need to inspect or block traffic from clients to the services exposed via private endpoints. Complete this inspection by using [Azure Firewall](../firewall/overview.md) or a third-party network virtual appliance.
+
+For more information and scenarios that involve private endpoints and Azure Firewall, see [Azure Firewall scenarios to inspect traffic destined to a private endpoint](inspect-traffic-with-azure-firewall.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual network and bastion host for the test virtual machine.
+> * Create the private endpoint virtual network.
+> * Create a test virtual machine.
+> * Deploy Azure Firewall.
+> * Create an Azure SQL database.
+> * Create a private endpoint for Azure SQL.
+> * Create a network peer between the private endpoint virtual network and the test virtual machine virtual network.
+> * Link the virtual networks to a private DNS zone.
+> * Configure application rules in Azure Firewall for Azure SQL.
+> * Route traffic between the test virtual machine and Azure SQL through Azure Firewall.
+> * Test the connection to Azure SQL and validate in Azure Firewall logs.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+
+- A Log Analytics workspace. For more information about the creation of a log analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+
+## Sign in to the Azure portal
+
+Sign in to the [Azure portal](https://portal.azure.com).
++++
+## Deploy Azure Firewall
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+1. In **Firewalls**, select **+ Create**.
+
+1. Enter or select the following information in the **Basics** tab of **Create a firewall**:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Name | Enter **firewall**. |
+ | Region | Select **East US 2**. |
+ | Availability zone | Select **None**. |
+ | Firewall SKU | Select **Standard**. |
+ | Firewall management | Select **Use a Firewall Policy to manage this firewall**. |
+ | Firewall policy | Select **Add new**. </br> Enter **firewall-policy** in **Policy name**. </br> Select **East US 2** in region. </br> Select **OK**. |
+ | Choose a virtual network | Select **Create new**. |
+ | Virtual network name | Enter **vnet-firewall**. |
+ | Address space | Enter **10.2.0.0/16**. |
+ | Subnet address space | Enter **10.2.1.0/26**. |
+ | Public IP address | Select **Add new**. </br> Enter **public-ip-firewall** in **Name**. </br> Select **OK**. |
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+Wait for the firewall deployment to complete before you continue.
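+
+If you prefer scripting, the following is a rough Azure CLI equivalent of these portal steps. It's a sketch, not part of the tutorial: it assumes the **azure-firewall** CLI extension is installed and reuses the names from the preceding table.
+
+```bash
+# Sketch only: assumes the azure-firewall CLI extension and the names used above.
+az extension add --name azure-firewall
+
+# Virtual network with the dedicated AzureFirewallSubnet.
+az network vnet create \
+    --resource-group test-rg \
+    --name vnet-firewall \
+    --address-prefix 10.2.0.0/16 \
+    --subnet-name AzureFirewallSubnet \
+    --subnet-prefix 10.2.1.0/26
+
+# Standard SKU public IP for the firewall.
+az network public-ip create \
+    --resource-group test-rg \
+    --name public-ip-firewall \
+    --sku Standard \
+    --allocation-method Static
+
+# Firewall policy and the firewall itself.
+az network firewall policy create \
+    --resource-group test-rg \
+    --name firewall-policy
+
+az network firewall create \
+    --resource-group test-rg \
+    --name firewall \
+    --firewall-policy firewall-policy
+
+# Attach the public IP configuration to the firewall.
+az network firewall ip-config create \
+    --resource-group test-rg \
+    --firewall-name firewall \
+    --name firewall-ip-config \
+    --public-ip-address public-ip-firewall \
+    --vnet-name vnet-firewall
+```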
+
+## Enable firewall logs
+
+In this section, you enable the firewall logs and send them to the log analytics workspace.
+
+> [!NOTE]
+> You must have a log analytics workspace in your subscription before you can enable firewall logs. For more information, see [Prerequisites](#prerequisites).
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+1. Select **firewall**.
+
+1. In **Monitoring** select **Diagnostic settings**.
+
+1. Select **+ Add diagnostic setting**.
+
+1. In **Diagnostic setting** enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Diagnostic setting name | Enter **diagnostic-setting-firewall**. |
+ | **Logs** | |
+ | Categories | Select **Azure Firewall Application Rule (Legacy Azure Diagnostics)** and **Azure Firewall Network Rule (Legacy Azure Diagnostics)**. |
+ | **Destination details** | |
+ | Destination | Select **Send to Log Analytics workspace**. |
+ | Subscription | Select your subscription. |
+ | Log Analytics workspace | Select your log analytics workspace. |
+
+1. Select **Save**.
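+
+A similar diagnostic setting can be created from the command line. The following sketch assumes a workspace named **log-analytics-workspace** in **test-rg** and the legacy category names shown in the table above.
+
+```bash
+# Sketch only: resolve the resource IDs, then create the diagnostic setting.
+firewall_id=$(az network firewall show \
+    --resource-group test-rg \
+    --name firewall \
+    --query id --output tsv)
+
+workspace_id=$(az monitor log-analytics workspace show \
+    --resource-group test-rg \
+    --workspace-name log-analytics-workspace \
+    --query id --output tsv)
+
+az monitor diagnostic-settings create \
+    --name diagnostic-setting-firewall \
+    --resource "$firewall_id" \
+    --workspace "$workspace_id" \
+    --logs '[{"category":"AzureFirewallApplicationRule","enabled":true},
+             {"category":"AzureFirewallNetworkRule","enabled":true}]'
+```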
+
+## Create an Azure SQL database
+
+1. In the search box at the top of the portal, enter **SQL**. Select **SQL databases** in the search results.
+
+1. In **SQL databases**, select **+ Create**.
+
+1. In the **Basics** tab of **Create SQL Database**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Database details** | |
+ | Database name | Enter **sql-db**. |
+ | Server | Select **Create new**. </br> Enter **sql-server-1** in **Server name** (Server names must be unique, replace **sql-server-1** with a unique value). </br> Select **(US) East US 2** in **Location**. </br> Select **Use SQL authentication**. </br> Enter a server admin sign-in and password. </br> Select **OK**. |
+ | Want to use SQL elastic pool? | Select **No**. |
+ | Workload environment | Leave the default of **Production**. |
+ | **Backup storage redundancy** | |
+ | Backup storage redundancy | Select **Locally redundant backup storage**. |
+
+1. Select **Next: Networking**.
+
+1. In the **Networking** tab of **Create SQL Database**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Network connectivity** | |
+ | Connectivity method | Select **Private endpoint**. |
+ | **Private endpoints** | |
+ | Select **+Add private endpoint**. | |
+ | **Create private endpoint** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | Location | Select **East US 2**. |
+ | Name | Enter **private-endpoint-sql**. |
+ | Target subresource | Select **SqlServer**. |
+ | **Networking** | |
+ | Virtual network | Select **vnet-private-endpoint**. |
+ | Subnet | Select **subnet-private-endpoint**. |
+ | **Private DNS integration** | |
+ | Integrate with private DNS zone | Select **Yes**. |
+ | Private DNS zone | Leave the default of **privatelink.database.windows.net**. |
+
+1. Select **OK**.
+
+1. Select **Review + create**.
+
+1. Select **Create**.
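+
+For reference, the following Azure CLI sketch outlines roughly equivalent steps. It assumes the names used in this tutorial; replace **sql-server-1** with your unique server name and the placeholders with your own credentials.
+
+```bash
+# Sketch only: create the logical server and database.
+az sql server create \
+    --resource-group test-rg \
+    --name sql-server-1 \
+    --location eastus2 \
+    --admin-user '<server-admin>' \
+    --admin-password '<admin-password>'
+
+az sql db create \
+    --resource-group test-rg \
+    --server sql-server-1 \
+    --name sql-db \
+    --backup-storage-redundancy Local
+
+# Private endpoint for the server in vnet-private-endpoint.
+sql_id=$(az sql server show --resource-group test-rg --name sql-server-1 --query id --output tsv)
+
+az network private-endpoint create \
+    --resource-group test-rg \
+    --name private-endpoint-sql \
+    --vnet-name vnet-private-endpoint \
+    --subnet subnet-private-endpoint \
+    --private-connection-resource-id "$sql_id" \
+    --group-id sqlServer \
+    --connection-name sql-connection
+```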
+
+## Connect virtual networks with virtual network peering
+
+In this section, you connect the virtual networks with virtual network peering. The networks **vnet-1** and **vnet-private-endpoint** are connected to **vnet-firewall**. There isn't direct connectivity between **vnet-1** and **vnet-private-endpoint**.
+
+1. In the search box at the top of the portal, enter **Virtual networks**. Select **Virtual networks** in the search results.
+
+1. Select **vnet-firewall**.
+
+1. In **Settings** select **Peerings**.
+
+1. In **Peerings** select **+ Add**.
+
+1. In **Add peering**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **This virtual network** | |
+ | Peering link name | Enter **vnet-firewall-to-vnet-1**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **vnet-1-to-vnet-firewall**. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-1**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+
+1. Select **Add**.
+
+1. In **Peerings** select **+ Add**.
+
+1. In **Add peering**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **This virtual network** | |
+ | Peering link name | Enter **vnet-firewall-to-vnet-private-endpoint**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **vnet-private-endpoint-to-vnet-firewall**. |
+ | Virtual network deployment model | Select **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-private-endpoint**. |
+ | Traffic to remote virtual network | Select **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Select **Allow (default)**. |
+ | Virtual network gateway or Route Server | Select **None (default)**. |
+
+1. Select **Add**.
+
+1. Verify the **Peering status** displays **Connected** for both network peers.
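+
+The same peerings can be scripted with the Azure CLI. The sketch below assumes all three virtual networks are in **test-rg**; each peering is created once per direction, mirroring the two link names entered in the portal.
+
+```bash
+# Sketch only: peer vnet-firewall with vnet-1 (both directions).
+az network vnet peering create \
+    --resource-group test-rg \
+    --name vnet-firewall-to-vnet-1 \
+    --vnet-name vnet-firewall \
+    --remote-vnet vnet-1 \
+    --allow-vnet-access \
+    --allow-forwarded-traffic
+
+az network vnet peering create \
+    --resource-group test-rg \
+    --name vnet-1-to-vnet-firewall \
+    --vnet-name vnet-1 \
+    --remote-vnet vnet-firewall \
+    --allow-vnet-access \
+    --allow-forwarded-traffic
+
+# Repeat the same pattern for vnet-firewall and vnet-private-endpoint.
+```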
+
+## Link the virtual networks to the private DNS zone
+
+The private DNS zone created during the private endpoint creation in the previous section must be linked to the **vnet-1** and **vnet-firewall** virtual networks.
+
+1. In the search box at the top of the portal, enter **Private DNS zone**. Select **Private DNS zones** in the search results.
+
+1. Select **privatelink.database.windows.net**.
+
+1. In **Settings** select **Virtual network links**.
+
+1. Select **+ Add**.
+
+1. In **Add virtual network link**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Virtual network link** | |
+ | Virtual network link name | Enter **link-to-vnet-1**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Configuration | Leave the default of unchecked for **Enable auto registration**. |
+
+1. Select **OK**.
+
+1. Select **+ Add**.
+
+1. In **Add virtual network link**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Virtual network link** | |
+ | Virtual network link name | Enter **link-to-vnet-firewall**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **vnet-firewall (test-rg)**. |
+ | Configuration | Leave the default of unchecked for **Enable auto registration**. |
+
+1. Select **OK**.
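+
+Equivalent links can be created with `az network private-dns link vnet create`. A minimal sketch, assuming the zone and both virtual networks are in **test-rg**:
+
+```bash
+# Sketch only: link both virtual networks to the private DNS zone.
+for vnet in vnet-1 vnet-firewall; do
+    az network private-dns link vnet create \
+        --resource-group test-rg \
+        --zone-name privatelink.database.windows.net \
+        --name "link-to-${vnet}" \
+        --virtual-network "$vnet" \
+        --registration-enabled false
+done
+```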
+
+## Create a route between vnet-1 and vnet-private-endpoint
+
+A direct virtual network peering between **vnet-1** and **vnet-private-endpoint** doesn't exist. You must create a route to allow traffic to flow between the virtual networks through Azure Firewall.
+
+The route sends traffic from **vnet-1** to the address space of virtual network **vnet-private-endpoint**, through the Azure Firewall.
+
+1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results.
+
+1. Select **+ Create**.
+
+1. In the **Basics** tab of **Create Route table**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Region | Select **East US 2**. |
+ | Name | Enter **vnet-1-to-vnet-firewall**. |
+ | Propagate gateway routes | Leave the default of **Yes**. |
+
+1. Select **Review + create**.
+
+1. Select **Create**.
+
+1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results.
+
+1. Select **vnet-1-to-vnet-firewall**.
+
+1. In **Settings** select **Routes**.
+
+1. Select **+ Add**.
+
+1. In **Add route**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Route name | Enter **subnet-1-to-subnet-private-endpoint**. |
+ | Destination type | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **10.1.0.0/16**. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter **10.2.1.4**. |
+
+1. Select **Add**.
+
+1. In **Settings**, select **Subnets**.
+
+1. Select **+ Associate**.
+
+1. In **Associate subnet**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Subnet | Select **subnet-1**. |
+
+1. Select **OK**.
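+
+A CLI sketch of the same route table, custom route, and subnet association is shown below, assuming the names and address ranges used above (the firewall's private IP address is **10.2.1.4**):
+
+```bash
+# Sketch only: route vnet-1 traffic for 10.1.0.0/16 through the firewall.
+az network route-table create \
+    --resource-group test-rg \
+    --name vnet-1-to-vnet-firewall
+
+az network route-table route create \
+    --resource-group test-rg \
+    --route-table-name vnet-1-to-vnet-firewall \
+    --name subnet-1-to-subnet-private-endpoint \
+    --address-prefix 10.1.0.0/16 \
+    --next-hop-type VirtualAppliance \
+    --next-hop-ip-address 10.2.1.4
+
+# Associate the route table with subnet-1.
+az network vnet subnet update \
+    --resource-group test-rg \
+    --vnet-name vnet-1 \
+    --name subnet-1 \
+    --route-table vnet-1-to-vnet-firewall
+```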
+
+## Configure an application rule in Azure Firewall
+
+Create an application rule to allow communication from **vnet-1** to the private endpoint of the Azure SQL server **sql-server-1.database.windows.net**. Replace **sql-server-1** with the name of your Azure SQL server.
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewall Policies** in the search results.
+
+1. In **Firewall Policies**, select **firewall-policy**.
+
+1. In **Settings** select **Application rules**.
+
+1. Select **+ Add a rule collection**.
+
+1. In **Add a rule collection**, enter or select the following information:
+
+ | Setting | Value |
+ |||
+ | Name | Enter **rule-collection-sql**. |
+ | Rule collection type | Leave the default of **Application**. |
+ | Priority | Enter **100**. |
+ | Rule collection action | Select **Allow**. |
+ | Rule collection group | Leave the default of **DefaultApplicationRuleCollectionGroup**. |
+ | **Rules** | |
+ | **Rule 1** | |
+ | Name | Enter **SQLPrivateEndpoint**. |
+ | Source type | Select **IP Address**. |
+ | Source | Enter **10.0.0.0/16**. |
+ | Protocol | Enter **mssql:1433**. |
+ | Destination type | Select **FQDN**. |
+ | Destination | Enter **sql-server-1.database.windows.net**. |
+
+1. Select **Add**.
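+
+The same rule can be scripted against the firewall policy. The following sketch uses the **azure-firewall** CLI extension and assumes the portal already created **DefaultApplicationRuleCollectionGroup**; verify the flag names against your CLI version.
+
+```bash
+# Sketch only: allow mssql traffic from vnet-1 to the SQL server FQDN.
+az network firewall policy rule-collection-group collection add-filter-collection \
+    --resource-group test-rg \
+    --policy-name firewall-policy \
+    --rule-collection-group-name DefaultApplicationRuleCollectionGroup \
+    --name rule-collection-sql \
+    --collection-priority 100 \
+    --action Allow \
+    --rule-name SQLPrivateEndpoint \
+    --rule-type ApplicationRule \
+    --source-addresses 10.0.0.0/16 \
+    --protocols Mssql=1433 \
+    --target-fqdns sql-server-1.database.windows.net
+```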
+
+## Test connection to Azure SQL from virtual machine
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+1. Select **vm-1**.
+
+1. In **Operations** select **Bastion**.
+
+1. Enter the username and password for the virtual machine.
+
+1. Select **Connect**.
+
+1. To verify name resolution of the private endpoint, enter the following command in the terminal window:
+
+ ```bash
+ nslookup sql-server-1.database.windows.net
+ ```
+
+ You receive a message similar to the following example. The IP address returned is the private IP address of the private endpoint.
+
+ ```output
+ Server: 127.0.0.53
+ Address: 127.0.0.53#53
+
+ Non-authoritative answer:
+ sql-server-8675.database.windows.net  canonical name = sql-server-8675.privatelink.database.windows.net.
+ Name: sql-server-8675.privatelink.database.windows.net
+ Address: 10.1.0.4
+ ```
+
+1. Install the SQL Server command-line tools from [Install the SQL Server command-line tools sqlcmd and bcp on Linux](/sql/linux/sql-server-linux-setup-tools). Proceed with the next steps after the installation is complete.
+
+1. Use the following command to connect to the SQL server that you created in the previous steps.
+
+ * Replace **\<server-admin>** with the admin username you entered during the SQL server creation.
+
+ * Replace **\<admin-password>** with the admin password you entered during SQL server creation.
+
+ * Replace **sql-server-1** with the name of your SQL server.
+
+ ```bash
+ sqlcmd -S sql-server-1.database.windows.net -U '<server-admin>' -P '<admin-password>'
+ ```
+
+1. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool.
+
+## Validate traffic in the Azure Firewall logs
+
+1. In the search box at the top of the portal, enter **Log Analytics**. Select **Log Analytics** in the search results.
+
+1. Select your log analytics workspace. In this example, the workspace is named **log-analytics-workspace**.
+
+1. Under **General**, select **Logs**.
+
+1. In the **Queries** search box, enter **Application rule**. In the returned results, under **Network**, select the **Run** button for **Application rule log data**.
+
+1. In the log query output, verify **sql-server-1.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **Rule**.
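+
+You can run a similar check from the command line with `az monitor log-analytics query`. The sketch below queries the legacy `AzureDiagnostics` table; column names such as `msg_s` come from that legacy schema and depend on the categories you enabled, so treat this as a starting point.
+
+```bash
+# Sketch only: query the application rule log for the SQL server FQDN.
+workspace_guid=$(az monitor log-analytics workspace show \
+    --resource-group test-rg \
+    --workspace-name log-analytics-workspace \
+    --query customerId --output tsv)
+
+az monitor log-analytics query \
+    --workspace "$workspace_guid" \
+    --analytics-query 'AzureDiagnostics
+        | where Category == "AzureFirewallApplicationRule"
+        | where msg_s contains "sql-server-1.database.windows.net"
+        | take 10'
+```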
++
+## Next steps
+
+Advance to the next article to learn how to use a private endpoint with Azure Private Resolver:
+> [!div class="nextstepaction"]
+> [Create a private endpoint DNS infrastructure with Azure Private Resolver for an on-premises workload](tutorial-dns-on-premises-private-resolver.md)
role-based-access-control Tutorial Role Assignments Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-group-powershell.md
-+ Last updated 02/02/2019
role-based-access-control Tutorial Role Assignments User Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-user-powershell.md
-+ Last updated 02/02/2019
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Previously updated : 08/14/2023- Last updated : 08/15/2023 # Azure Route Server support for ExpressRoute and Azure VPN
For example, in the following diagram:
You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN and ExpressRoute gateways are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other.
+If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes dynamically over BGP. For more information, see [How to configure BGP for Azure VPN Gateway](../vpn-gateway/bgp-howto.md). If you don't enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes that are defined in the local network gateway of *On-premises 1*. For more information, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). Whether you enable BGP on the VPN gateway or not, the gateway advertises the routes it learns to the Route Server if route exchange is enabled. For more information, see [Configure route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange).
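+
+For example, a local network gateway for *On-premises 1* with BGP enabled might be created as sketched below. This is a hypothetical example: the resource names, addresses, and ASN are placeholders, not values from this article.
+
+```bash
+# Sketch only: local network gateway with BGP settings (placeholder values).
+az network local-gateway create \
+    --resource-group test-rg \
+    --name lng-onpremises-1 \
+    --gateway-ip-address 203.0.113.10 \
+    --local-address-prefixes 192.168.0.0/16 \
+    --asn 65050 \
+    --bgp-peering-address 192.168.0.1
+```
+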
+ > [!IMPORTANT]
-> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not necessary to have BGP enabled on the VPN gateway.
+> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not a requirement to have BGP enabled on the VPN gateway to communicate with the Route Server.
-> [!IMPORTANT]
+> [!NOTE]
> When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred. ## Next steps
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
Run the following command to deploy the control plane:
```bash
-az logout
-cd ~/Azure_SAP_Automated_Deployment
-cp -Rp samples/Terraform/WORKSPACES config
-cd config/WORKSPACES
- export ARM_SUBSCRIPTION_ID="<subscriptionId>" export ARM_CLIENT_ID="<appId>" export ARM_CLIENT_SECRET="<password>" export ARM_TENANT_ID="<tenantId>" export env_code="MGMT" export region_code="WEEU"
-export vnet_code="WEEU"
+export vnet_code="DEP01"
+
+export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
+export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+az logout
az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
-export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
-="${subscriptionId}"
-export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES"
-export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES
sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
--subscription "${ARM_SUBSCRIPTION_ID}" \ --spn_id "${ARM_CLIENT_ID}" \ --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}" \
- --auto-approve
+ --tenant_id "${ARM_TENANT_ID}"
```
sap Deploy Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-workload-zone.md
export region_code="<region_code>"
export vnet_code="SAP02" export deployer_environment="MGMT"
-az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
- export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES" export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
+az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}"
++ cd "${CONFIG_REPO_PATH}/LANDSCAPE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE" parameterFile="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars"
$SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \
--subscription "${ARM_SUBSCRIPTION_ID}" \ --spn_id "${ARM_CLIENT_ID}" \ --spn_secret "${ARM_CLIENT_SECRET}" \
- --tenant_id "${ARM_TENANT_ID}" \
- --auto-approve
+ --tenant_id "${ARM_TENANT_ID}"
``` # [Windows](#tab/windows)
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Follow the guidance here [Configure Azure DevOps for SDAF](configure-devops.md)
You can run the SAP on Azure Deployment Automation Framework from a virtual machine in Azure. The following steps describe how to create the environment.
-Clone the repository and prepare the execution environment by using the following steps on a Linux Virtual machine in Azure:
+> [!IMPORTANT]
+> Ensure that the virtual machine is using either a system assigned or user assigned identity with permissions on the subscription to create resources.
+ Ensure the Virtual Machine has the following prerequisites installed:+ - git - jq - unzip
+ - virtualenv (if running on Ubuntu)
-Ensure that the virtual machine is using either a system assigned or user assigned identity with permissions on the subscription to create resources.
--- Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment.
+You can install the prerequisites on an Ubuntu Virtual Machine by using the following command:
```bash
-mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_
+sudo apt-get install -y git jq unzip virtualenv
-git clone https://github.com/Azure/sap-automation.git sap-automation
+```
-git clone https://github.com/Azure/sap-automation-samples.git samples
+You can then install the deployer components using the following commands:
-git clone https://github.com/Azure/sap-automation-bootstrap.git config
+```bash
-cd sap-automation/deploy/scripts
-
+wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh
+chmod +x ./configure_deployer.sh
./configure_deployer.sh
-```
+# Source the new variables
+. /etc/profile.d/deploy_server.sh
+
+```
-> [!TIP]
-> The deployer already clones the required repositories.
## Samples
The ~/Azure_SAP_Automated_Deployment/samples folder contains a set of sample con
```bash cd ~/Azure_SAP_Automated_Deployment
-cp -Rp samples/Terraform/WORKSPACES config
+cp -Rp samples/Terraform/WORKSPACES ~/Azure_SAP_Automated_Deployment
```
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
The data source definition specifies the data to index, credentials, and policie
### Supported credentials and connection strings
-Indexers can connect to a collection using the following connections. For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit "ApiKind" from the connection string.
+Indexers can connect to a collection using the following connections.
Avoid port numbers in the endpoint URL. If you include the port number, the connection will fail.
Avoid port numbers in the endpoint URL. If you include the port number, the conn
| Managed identity connection string | ||
-|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)" }`|
-|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information.|
+|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)/(IdentityAuthType=[identity-auth-type])" }`|
+|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md). For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit `ApiKind` from the connection string. For more information about `ApiKind` and `IdentityAuthType`, see [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).|
<a name="flatten-structures"></a>
search Search Howto Managed Identities Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md
You can use a system-assigned managed identity or a user-assigned managed identi
* [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
-* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Azure Cosmos DB.
+* Assign the **Cosmos DB Account Reader** role to the search service managed identity. This role grants the ability to read Azure Cosmos DB account data. For more information about role assignments in Cosmos DB, see [Configure role-based access control to data](search-howto-managed-identities-data-sources.md#assign-a-role).
+
+* Data plane role assignment: follow [Data plane role assignment](../cosmos-db/how-to-setup-rbac.md) to learn more.
+
+* Example for a read-only data plane role assignment:
+```azurepowershell
+# Replace the placeholder values in angle brackets with your own.
+$cosmosdb_acc_name = "<cosmos db account name>"
+$resource_group = "<resource group name>"
+$subscription = "<subscription id>"
+$system_assigned_principal = "<principal id for system assigned identity>"
+$readOnlyRoleDefinitionId = "00000000-0000-0000-0000-000000000001"
+$scope=$(az cosmosdb show --name $cosmosdb_acc_name --resource-group $resource_group --query id --output tsv)
+```
+
+Role assignment for system-assigned identity:
- For data reader access, you'll need the **Cosmos DB Account Reader** role and the identity used to make the request. This role works for all Azure Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role.
+```azurepowershell
+az cosmosdb sql role assignment create --account-name $cosmosdb_acc_name --resource-group $resource_group --role-definition-id $readOnlyRoleDefinitionId --principal-id $system_assigned_principal --scope $scope
+```
+* For Cosmos DB for NoSQL, you can optionally [enforce RBAC as the only authentication method](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) for data connections by setting `disableLocalAuth` to `true` for your Cosmos DB account.
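+
+If you prefer scripting, one generic way to set this flag is sketched below with `az resource update`; the account and resource group names are placeholders.
+
+```bash
+# Sketch only: enforce RBAC by disabling key-based auth on the account.
+az resource update \
+    --resource-type "Microsoft.DocumentDB/databaseAccounts" \
+    --resource-group "<resource group name>" \
+    --name "<cosmos db account name>" \
+    --set properties.disableLocalAuth=true
+```
+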
- At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Azure Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Azure Cosmos DB.
+* *For Gremlin and MongoDB Collections*:
+ Indexer support is currently in preview. At this time, a preview limitation exists that requires Cognitive Search to connect using keys. You can still set up a managed identity and role assignment, but Cognitive Search only uses the role assignment to get keys for the connection. This limitation means that you can't configure an [RBAC-only approach](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) when your indexers connect to Gremlin or MongoDB collections with a managed identity.
* You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-index-cosmosdb.md).
The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and th
When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name. * For SQL collections, the connection string doesn't require "ApiKind".
+* For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It isn't applicable to MongoDB and Gremlin collections.
* For MongoDB collections, add "ApiKind=MongoDb" to the connection string and use a preview REST API. * For Gremlin graphs, add "ApiKind=Gremlin" to the connection string and use a preview REST API.
api-key: [Search service admin key]
"name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {
- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];"
+ "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]"
}, "container": { "name": "[my-cosmos-collection]", "query": null }, "dataChangeDetectionPolicy": null
The 2021-04-30-preview REST API supports connections based on a user-assigned ma
* First, the format of the "credentials" property is the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name. * For SQL collections, the connection string doesn't require "ApiKind".
+ * For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It isn't applicable to MongoDB and Gremlin collections.
* For MongoDB collections, add "ApiKind=MongoDb" to the connection string * For Gremlin graphs, add "ApiKind=Gremlin" to the connection string.
api-key: [Search service admin key]
"name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {
- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];"
+ "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]"
}, "container": { "name": "[my-cosmos-collection]", "query": null
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
Previously updated : 01/16/2023 Last updated : 08/14/2023 # Return a semantic answer in Azure Cognitive Search
The "semanticConfiguration" parameter is required. It's defined in a search inde
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details.
++ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details. + For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Previously updated : 7/14/2023 Last updated : 8/15/2023 # Configure semantic ranking and return captions in search results
The following example in this section uses the [hotels-sample-index](search-get-
1. Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter. While the ["searchFields" approach](#2buse-searchfields-for-field-prioritization) automatically included captions, "semanticConfiguration" doesn't.
- Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
+ Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
+
+ For semantic captions, the fields referenced in the "semanticConfiguration" should stay within a limit of 2,000-3,000 words (equivalent to about 10,000 tokens); otherwise, important caption results can be missed. If you anticipate that the word count of the fields used by the "semanticConfiguration" could exceed this limit and you need captions, consider the [Text split cognitive skill](cognitive-search-skill-textsplit.md) as part of your [AI enrichment pipeline](cognitive-search-concept-intro.md) while indexing your data with [built-in pull indexers](search-indexer-overview.md).
1. Set "highlightPreTag" and "highlightPostTag" if you want to override the default highlight formatting that's applied to captions.
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md
Previously updated : 07/14/2023 Last updated : 08/14/2023 # Semantic ranking in Azure Cognitive Search
Each document is now represented by a single long string.
> [!NOTE] > In the 2020-06-30-preview, the "searchFields" parameter is used rather than the semantic configuration to determine which fields to use. We recommend upgrading to the 2021-04-30-preview API version for best results.
-The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens are roughly equivalent to a string that is 128 words in length.
+The string is composed of tokens, not characters or words. The maximum token count is 256 unique tokens. For estimation purposes, you can assume that 256 tokens are roughly equivalent to a string that is 256 words in length.
> [!NOTE] > Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from "searchFields". For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer).
service-bus-messaging Service Bus Amqp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-overview.md
Title: Overview of AMQP 1.0 in Azure Service Bus description: Learn how Azure Service Bus supports Advanced Message Queuing Protocol (AMQP), an open standard protocol. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Advanced Message Queueing Protocol (AMQP) 1.0 support in Service Bus
service-bus-messaging Service Bus Amqp Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-troubleshoot.md
Title: Troubleshoot AMQP errors in Azure Service Bus | Microsoft Docs description: Provides a list of AMQP errors you may receive when using Azure Service Bus, and cause of those errors. Previously updated : 09/20/2021 Last updated : 08/16/2023 # AMQP errors in Azure Service Bus
-This article provides some of the errors you receive when using AMQP with Azure Service Bus. They are all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link.
+This article provides some of the errors you receive when using AMQP with Azure Service Bus. They're all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link.
## Link is closed You see the following error when the AMQP connection and link are active but no calls (for example, send or receive) are made using the link for 10 minutes. So, the link is closed. The connection is still open.
amqp:link:detach-forced:The link 'G2:7223832:user.tenant0.cud_00000000000-0000-0
``` ## Connection is closed
-You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link has not been created in 5 minutes.
+You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link hasn't been created in 5 minutes.
``` Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:00000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T17:32:00', info=null} ```
-## Link is not created
-You see this error when a new AMQP connection is created but a link is not created within 1 minute of the creation of the AMQP Connection.
+## Link isn't created
+You see this error when a new AMQP connection is created but a link isn't created within 1 minute of the creation of the AMQP Connection.
``` Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 60000 milliseconds and is closed by container 'LinkTracker'. TrackingId:0000000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T18:41:51', info=null}
service-bus-messaging Service Bus Messaging Sql Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-filter.md
Title: Azure Service Bus Subscription Rule SQL Filter syntax | Microsoft Docs description: This article provides details about SQL filter grammar. A SQL filter supports a subset of the SQL-92 standard. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Subscription Rule SQL Filter Syntax
-A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown below.
+A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown in this section.
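+
+For example, a rule with a SQL filter might be created with the Azure CLI as sketched below; the namespace, topic, and subscription names are placeholders, and the filtered properties (`color`, `quantity`) are hypothetical user properties.
+
+```bash
+# Sketch only: create a subscription rule whose SQL filter matches messages
+# with user properties color = 'blue' and quantity = 10.
+az servicebus topic subscription rule create \
+    --resource-group test-rg \
+    --namespace-name '<namespace>' \
+    --topic-name '<topic>' \
+    --subscription-name '<subscription>' \
+    --name ColorBlueFilter \
+    --filter-sql-expression "color = 'blue' AND quantity = 10"
+```
+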
Service Bus Premium also supports the [JMS SQL message selector syntax](https://docs.oracle.com/javaee/7/api/javax/jms/Message.html) through the JMS 2.0 API.
Service Bus Premium also supports the [JMS SQL message selector syntax](https://
## Remarks
-An attempt to access a non-existent system property is an error, while an attempt to access a non-existent user property isn't an error. Instead, a non-existent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation.
+An attempt to access a nonexistent system property is an error, while an attempt to access a nonexistent user property isn't an error. Instead, a nonexistent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation.
## property_name
Consider the following Sql Filter semantics:
### Property evaluation semantics -- An attempt to evaluate a non-existent system property throws a `FilterException` exception.
+- An attempt to evaluate a nonexistent system property throws a `FilterException` exception.
- A property that doesn't exist is internally evaluated as **unknown**.
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-networking.md
The following steps describe enable public IP on your node.
```json {
- "name": "Secondary Node Type",
+ "name": "<secondary_node_type_name>",
"apiVersion": "2023-02-01-preview", "properties": { "isPrimary" : false,
- "vmImageResourceId": "/subscriptions/<SubscriptionID>/resourceGroups/<myRG>/providers/Microsoft.Compute/images/<MyCustomImage>",
+ "vmImageResourceId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Compute/images/<your_custom_image>",
"vmSize": "Standard_D2", "vmInstanceCount": 5, "dataDiskSizeGB": 100,
The following steps describe enable public IP on your node.
"ipAddress": "<ip_address_0>", "ipConfiguration": { "id": "<configuration_id_0>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>", "provisioningState": "Succeeded", "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static",
- "resourceGroup": "<your_resource_group",
+ "resourceGroup": "<your_resource_group>",
"resourceGuid": "resource_guid_0", "sku": { "name": "Standard"
The following steps describe enable public IP on your node.
"ipAddress": "<ip_address_1>", "ipConfiguration": { "id": "<configuration_id_1>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>",
The following steps describe enable public IP on your node.
"ipAddress": "<ip_address_2>", "ipConfiguration": { "id": "<configuration_id_2>",
- "resourceGroup": "<your_resource_group"
+ "resourceGroup": "<your_resource_group>"
}, "ipTags": [], "name": "<name>",
service-fabric Service Fabric Cluster Creation Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-create-template.md
+ Last updated 07/14/2022
spring-apps How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-palo-alto.md
This article describes how to use Azure Spring Apps with a Palo Alto firewall.
-For example, the [Azure Spring Apps reference architecture](./reference-architecture.md) includes an Azure Firewall to secure your applications. However, if your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article.
+If your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article.
-You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
+You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
-> [!Note]
+> [!NOTE]
> In describing the use of REST APIs, this article uses the PowerShell variable syntax to indicate names and values that are left to your discretion. Be sure to use the same values in all the steps. > > After you've configured the TLS/SSL certificate in Palo Alto, remove the `-SkipCertificateCheck` argument from all Palo Alto REST API calls in the examples below.
spring-apps Quickstart Deploy Infrastructure Vnet Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-azure-cli.md
The Enterprise deployment plan includes the following Tanzu components:
## Review the Azure CLI deployment script
-The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
+The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Standard plan](#tab/azure-spring-apps-standard)
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI). * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-bicep.md
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI). * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-terraform.md
For more customization including custom domain support, see the [Azure Spring Ap
## Review the Terraform plan
-The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md).
+The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Standard plan](#tab/azure-spring-apps-standard)
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI) * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
spring-apps Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet.md
The Enterprise deployment plan includes the following Tanzu components:
## Review the template
-The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](reference-architecture.md).
+The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](/previous-versions/azure/spring-apps/reference-architecture).
### [Standard plan](#tab/azure-spring-apps-standard)
In this quickstart, you deployed an Azure Spring Apps instance into an existing
* [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI) * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).
-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
+* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/).
* Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md).
spring-apps Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/reference-architecture.md
- Previously updated : 05/31/2022-- Title: Azure Spring Apps reference architecture---
-description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps.
--
-# Azure Spring Apps reference architecture
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ✔️ Standard ✔️ Enterprise
-
-This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps. In the design, Azure Spring Apps is deployed in a single spoke that's dependent on shared services hosted in the hub. The architecture is built with components to achieve the tenets in the [Microsoft Azure Well-Architected Framework][16].
-
-There are two flavors of Azure Spring Apps: Standard plan and Enterprise plan.
-
-The Azure Spring Apps Standard plan is composed of the Spring Cloud Config Server, the Spring Cloud Service Registry, and the kpack build service.
-
-The Azure Spring Apps Enterprise plan is composed of the VMware Tanzu® Build Service™, Application Configuration Service for VMware Tanzu®, VMware Tanzu® Service Registry, Spring Cloud Gateway for VMware Tanzu®, and API portal for VMware Tanzu®.
-
-For an implementation of this architecture, see the [Azure Spring Apps Reference Architecture][10] on GitHub.
-
-Deployment options for this architecture include Azure Resource Manager (ARM), Terraform, Azure CLI, and Bicep. The artifacts in this repository provide a foundation that you can customize for your environment. You can group resources such as Azure Firewall or Application Gateway into different resource groups or subscriptions. This grouping helps keep different functions separate, such as IT infrastructure, security, business application teams, and so on.
-
-## Planning the address space
-
-Azure Spring Apps requires two dedicated subnets:
-
-* Service runtime
-* Spring Boot applications
-
-Each of these subnets is dedicated to a single Azure Spring Apps cluster; multiple clusters can't share the same subnets. The minimum size of each subnet is /28. The number of application instances that Azure Spring Apps can support varies based on the size of the subnet. You can find the detailed virtual network requirements in the [Virtual network requirements][11] section of [Deploy Azure Spring Apps in a virtual network][17].
-
-> [!WARNING]
-> The selected subnet address ranges can't overlap with the existing virtual network address space, and shouldn't overlap with any peered or on-premises subnet address ranges.
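A minimal Azure PowerShell sketch of this layout, assuming hypothetical resource names, location, and address ranges:

```powershell
# Assumed names, location, and address ranges; adjust to your environment.
$vnet = New-AzVirtualNetwork -Name 'vnet-spring-spoke' -ResourceGroupName 'rg-spring' `
    -Location 'eastus' -AddressPrefix '10.1.0.0/24'

# One dedicated subnet for the service runtime and one for the Spring Boot
# applications, each at the /28 minimum size.
Add-AzVirtualNetworkSubnetConfig -Name 'snet-runtime' -AddressPrefix '10.1.0.0/28' -VirtualNetwork $vnet | Out-Null
Add-AzVirtualNetworkSubnetConfig -Name 'snet-apps' -AddressPrefix '10.1.0.16/28' -VirtualNetwork $vnet | Out-Null

$vnet | Set-AzVirtualNetwork
```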
-
-## Use cases
-
-Typical uses for this architecture include:
-
-* Private applications: Internal applications deployed in hybrid cloud environments
-* Public applications: Externally facing applications
-
-These use cases are similar except for their security and network traffic rules. This architecture is designed to support the nuances of each.
-
-## Private applications
-
-The following list describes the infrastructure requirements for private applications. These requirements are typical in highly regulated environments.
-
-* A subnet must only have one instance of Azure Spring Apps.
-* Adherence to at least one Security Benchmark should be enforced.
-* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
-* Azure service dependencies should communicate through Service Endpoints or Private Link.
-* Data at rest should be encrypted.
-* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
-* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* If [Azure Spring Apps Config Server][8] is used to load config properties from a repository, the repository must be private.
-* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
-* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
-* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
-* Subnets managed by the Azure Spring Apps deployment must not be modified.
-
-The following list shows the components that make up the design:
-
-* On-premises network
- * Domain Name Service (DNS)
- * Gateway
-* Hub subscription
- * Application Gateway Subnet
- * Azure Firewall Subnet
- * Shared Services Subnet
-* Connected subscription
- * Azure Bastion Subnet
- * Virtual Network Peer
-
-The following list describes the Azure services in this reference architecture:
-
-* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources.
-
-* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Delivery (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
-
-* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-
-* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-
-The following diagrams represent a well-architected hub and spoke design that addresses the above requirements:
-
-### [Standard plan](#tab/azure-spring-standard)
--
-### [Enterprise plan](#tab/azure-spring-enterprise)
----
-## Public applications
-
-The following list describes the infrastructure requirements for public applications. These requirements are typical in highly regulated environments.
-
-* A subnet must only have one instance of Azure Spring Apps.
-* Adherence to at least one Security Benchmark should be enforced.
-* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS.
-* Azure DDoS Protection should be enabled.
-* Azure service dependencies should communicate through Service Endpoints or Private Link.
-* Data at rest should be encrypted.
-* Data in transit should be encrypted.
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps.
-* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall).
-* Ingress traffic should be managed by at least Application Gateway or Azure Front Door.
-* Internet routable addresses should be stored in Azure Public DNS.
-* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault.
-* Name resolution of hosts on-premises and in the Cloud should be bidirectional.
-* No direct egress to the public Internet except for control plane traffic.
-* Resource Groups managed by the Azure Spring Apps deployment must not be modified.
-* Subnets managed by the Azure Spring Apps deployment must not be modified.
-
-The following list shows the components that make up the design:
-
-* On-premises network
- * Domain Name Service (DNS)
- * Gateway
-* Hub subscription
- * Application Gateway Subnet
- * Azure Firewall Subnet
- * Shared Services Subnet
-* Connected subscription
- * Azure Bastion Subnet
- * Virtual Network Peer
-
-The following list describes the Azure services in this reference architecture:
-
-* [Azure Web Application Firewall][7]: a feature of Azure Application Gateway that provides centralized protection of applications from common exploits and vulnerabilities.
-
-* [Azure Application Gateway][6]: a layer-7 load balancer for application traffic that provides Transport Layer Security (TLS) offload.
-
-* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources.
-
-* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises.
-
-* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Delivery (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps.
-
-* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure.
-
-* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications.
-
-The following diagrams represent a well-architected hub and spoke design that addresses the above requirements. Only the hub-virtual-network communicates with the internet:
-
-### [Standard plan](#tab/azure-spring-standard)
--
-### [Enterprise plan](#tab/azure-spring-enterprise)
----
-## Azure Spring Apps on-premises connectivity
-
-Applications in Azure Spring Apps can communicate with various Azure, on-premises, and external resources. By using the hub and spoke design, applications can route traffic externally or to the on-premises network using ExpressRoute or Site-to-Site Virtual Private Network (VPN).
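Because the requirements above route egress through a central NVA, a user-defined route on the spoke subnets typically sends outbound traffic to the hub. Here's a sketch, assuming hypothetical names and a placeholder firewall IP:

```powershell
# Assumed names and next-hop IP; 10.0.1.4 stands in for the NVA's private IP.
$routeTable = New-AzRouteTable -Name 'rt-spring-egress' -ResourceGroupName 'rg-spring' -Location 'eastus'

# Send all outbound traffic to the central NVA (for example, Azure Firewall) in the hub.
Add-AzRouteConfig -RouteTable $routeTable -Name 'default-via-nva' `
    -AddressPrefix '0.0.0.0/0' -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.4' |
    Set-AzRouteTable
```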
-
-## Azure Well-Architected Framework considerations
-
-The [Azure Well-Architected Framework][16] is a set of guiding tenets to follow in establishing a strong infrastructure foundation. The framework contains the following categories: cost optimization, operational excellence, performance efficiency, reliability, and security.
-
-### Cost optimization
-
-Because of the nature of distributed system design, infrastructure sprawl is a reality that results in unexpected and uncontrollable costs. Azure Spring Apps is built using components that scale so that it can meet demand and optimize cost. The core of this architecture is Azure Kubernetes Service (AKS), which is designed to reduce the complexity and operational overhead of managing Kubernetes, including efficiencies in the operational cost of the cluster.
-
-You can deploy different applications and application types to a single instance of Azure Spring Apps. The service supports autoscaling of applications triggered by metrics or schedules that can improve utilization and cost efficiency.
-
-You can also use Application Insights and Azure Monitor to lower operational cost. With the visibility provided by the comprehensive logging solution, you can implement automation to scale the components of the system in real time. You can also analyze log data to reveal inefficiencies in the application code that you can address to improve the overall cost and performance of the system.
-
-### Operational excellence
-
-Azure Spring Apps addresses multiple aspects of operational excellence. You can combine these aspects to ensure that the service runs efficiently in production environments, as described in the following list:
-
-* You can use Azure Pipelines to ensure that deployments are reliable and consistent while helping you avoid human error.
-* You can use Azure Monitor and Application Insights to store log and telemetry data.
- You can assess collected log and metric data to ensure the health and performance of your applications. Application Performance Monitoring (APM) is fully integrated into the service through a Java agent. This agent provides visibility into all the deployed applications and dependencies without requiring extra code. For more information, see the blog post [Effortlessly monitor applications and dependencies in Azure Spring Apps][15].
-* You can use Microsoft Defender for Cloud to ensure that applications maintain security by providing a platform to analyze and assess the data provided.
-* The service supports various deployment patterns. For more information, see [Set up a staging environment in Azure Spring Apps][14].
-
-### Reliability
-
-Azure Spring Apps is built on AKS. While AKS provides a level of resiliency through clustering, this reference architecture goes even further by incorporating services and architectural considerations to increase availability of the application if there's component failure.
-
-By building on top of a well-defined hub and spoke design, the foundation of this architecture ensures that you can deploy it to multiple regions. For the private application use case, the architecture uses Azure Private DNS to ensure continued availability during a geographic failure. For the public application use case, Azure Front Door and Azure Application Gateway ensure availability.
-
-### Security
-
-The security of this architecture is addressed by its adherence to industry-defined controls and benchmarks. In this context, a "control" is a concise, well-defined best practice, such as "Employ the least privilege principle when implementing information system access" (IAM-05). The controls in this architecture are from the [Cloud Controls Matrix][19] (CCM) by the [Cloud Security Alliance][18] (CSA) and the [Microsoft Azure Foundations Benchmark][20] (MAFB) by the [Center for Internet Security][21] (CIS). The applied controls focus on the primary security design principles of governance, networking, and application security. It is your responsibility to handle the design principles of Identity, Access Management, and Storage as they relate to your target infrastructure.
-
-#### Governance
-
-The primary aspect of governance that this architecture addresses is segregation through the isolation of network resources. In the CCM, DCS-08 recommends ingress and egress control for the datacenter. To satisfy the control, the architecture uses a hub and spoke design with Network Security Groups (NSGs) to filter east-west traffic between resources. The architecture also filters traffic between central services in the hub and resources in the spoke. The architecture uses an instance of Azure Firewall to manage traffic between the internet and the resources within the architecture.
-
-The following table shows the control that addresses datacenter security in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-|:-|:--|
-| DCS-08 | Datacenter Security Unauthorized Persons Entry |
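As a hedged illustration of the east-west filtering described above, the following Azure PowerShell sketch creates an NSG with one rule; the names, address prefixes, and port are assumptions, not values from this architecture:

```powershell
# Assumed prefixes: hub shared-services subnet to spoke subnet over HTTPS.
$rule = New-AzNetworkSecurityRuleConfig -Name 'allow-hub-to-spoke-https' `
    -Description 'Allow HTTPS from hub shared services to the spoke' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix '10.0.2.0/24' -SourcePortRange '*' `
    -DestinationAddressPrefix '10.1.0.0/24' -DestinationPortRange 443

New-AzNetworkSecurityGroup -Name 'nsg-spring-spoke' -ResourceGroupName 'rg-spring' `
    -Location 'eastus' -SecurityRules $rule
```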
-
-#### Network
-
-The network design supporting this architecture is derived from the traditional hub and spoke model. This decision ensures that network isolation is a foundational construct. CCM control IVS-06 recommends that traffic between networks and virtual machines is restricted and monitored between trusted and untrusted environments. This architecture adopts the control by implementing NSGs for east-west traffic (within the "data center") and Azure Firewall for north-south traffic (outside of the "data center"). CCM control IPY-04 recommends that the infrastructure use secure network protocols for the exchange of data between services. The Azure services supporting this architecture all use standard secure protocols such as TLS for HTTP and SQL.
-
-The following table shows the CCM controls that address network security in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-| :-- | :-|
-| IPY-04 | Network Protocols |
-| IVS-06 | Network Security |
-
-The network implementation is further secured by defining controls from the MAFB. The controls ensure that traffic into the environment is restricted from the public Internet.
-
-The following table shows the CIS controls that address network security in this reference:
-
-| CIS Control ID | CIS Control Description |
-|:--|:--|
-| 6.2 | Ensure that SSH access is restricted from the internet. |
-| 6.3 | Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP). |
-| 6.5 | Ensure that Network Watcher is 'Enabled'. |
-| 6.6 | Ensure that ingress using UDP is restricted from the internet. |
-
-Azure Spring Apps requires management traffic to egress from Azure when deployed in a secured environment. You must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md).
-
-#### Application security
-
-This design principle covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Apps runs with the least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach.
-
-The following table shows the CCM controls that address key management in this reference:
-
-| CSA CCM Control ID | CSA CCM Control Domain |
-|:-|:--|
-| EKM-01 | Encryption and Key Management Entitlement |
-| EKM-02 | Encryption and Key Management Key Generation |
-| EKM-03 | Encryption and Key Management Sensitive Data Protection |
-| EKM-04 | Encryption and Key Management Storage and Access |
-
-From the CCM, EKM-02 and EKM-03 recommend policies and procedures to manage keys and to use encryption protocols to protect sensitive data. EKM-01 recommends that all cryptographic keys have identifiable owners so that they can be managed. EKM-04 recommends the use of standard algorithms.
-
-The following table shows the CIS controls that address key management in this reference:
-
-| CIS Control ID | CIS Control Description |
-|:--|:--|
-| 8.1 | Ensure that the expiration date is set on all keys. |
-| 8.2 | Ensure that the expiration date is set on all secrets. |
-| 8.4 | Ensure the key vault is recoverable. |
-
-The CIS controls 8.1 and 8.2 recommend that expiration dates are set for credentials to ensure that rotation is enforced. CIS control 8.4 ensures that the contents of the key vault can be restored to maintain business continuity.
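A minimal Azure PowerShell sketch of these key-management controls, assuming hypothetical vault and secret names:

```powershell
# Purge protection keeps the vault contents recoverable (CIS 8.4).
New-AzKeyVault -VaultName 'kv-spring-demo' -ResourceGroupName 'rg-spring' `
    -Location 'eastus' -EnablePurgeProtection

# An expiration date on the secret enforces rotation (CIS 8.2).
$secretValue = ConvertTo-SecureString 'example-value' -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName 'kv-spring-demo' -Name 'demo-secret' `
    -SecretValue $secretValue -Expires (Get-Date).AddDays(90)
```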
-
-The aspects of application security set a foundation for the use of this reference architecture to support a Spring workload in Azure.
-
-## Next steps
-
-Explore this reference architecture through the ARM, Terraform, and Azure CLI deployments available in the [Azure Spring Apps Reference Architecture][10] repository.
-
-<!-- Reference links in article -->
-[1]: ./index.yml
-[2]: ../key-vault/index.yml
-[3]: ../azure-monitor/index.yml
-[4]: ../security-center/index.yml
-[5]: /azure/devops/pipelines/
-[6]: ../application-gateway/index.yml
-[7]: ../web-application-firewall/index.yml
-[8]: ./how-to-config-server.md
-[9]: https://steeltoe.io/
-[10]: https://github.com/Azure/azure-spring-apps-landing-zone-accelerator/tree/reference-architecture
-[11]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements
-[12]: ./vnet-customer-responsibilities.md#azure-spring-apps-network-requirements
-[13]: ./vnet-customer-responsibilities.md#azure-spring-apps-fqdn-requirements--application-rules
-[14]: ./how-to-staging-environment.md
-[15]: https://devblogs.microsoft.com/java/monitor-applications-and-dependencies-in-azure-spring-cloud/
-[16]: /azure/architecture/framework/
-[17]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements
-[18]: https://cloudsecurityalliance.org/
-[19]: https://cloudsecurityalliance.org/research/working-groups/cloud-controls-matrix
-[20]: /azure/security/benchmarks/v2-cis-benchmark
-[21]: https://www.cisecurity.org/
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md
Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMw
- [Deploy Spring microservices to Azure](/training/modules/azure-spring-cloud-workshop/) - [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml)-- [Azure Spring Apps reference architecture](reference-architecture.md)
+- [Azure Spring Apps architecture design](/azure/architecture/web-apps/spring-apps?toc=/azure/spring-apps/toc.json&bc=/azure/spring-apps/breadcrumb/toc.json)
- Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-apps) applications to Azure Spring Apps
storage-mover Agent Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md
The agent displays detailed progress. Once the registration is complete, you're
To accomplish seamless authentication with Azure and authorization to various Azure resources, the agent is registered with the following Azure services:
- Azure Storage Mover (Microsoft.StorageMover)
-- Azure ARC (Microsoft.HybridCompute)
+- Azure Arc (Microsoft.HybridCompute)
### Azure Storage Mover service
Registration to the Azure Storage mover service is visible and manageable throug
You can reference this Azure Resource Manager (ARM) resource when you want to assign migration jobs to the specific agent VM it symbolizes.
-### Azure ARC service
+### Azure Arc service
-The agent is also registered with the [Azure ARC service](../azure-arc/overview.md). ARC is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent.
+The agent is also registered with the [Azure Arc service](../azure-arc/overview.md). Arc is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent.
Azure Storage Mover uses a system-assigned managed identity. A managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed as well. The deletion process is automatically initiated when you unregister the agent. However, there are other ways to remove this identity. Doing so incapacitates the registered agent and requires the agent to be unregistered. Only the registration process enables an agent to obtain and maintain its Azure identity properly.
> [!NOTE]
-> During public preview, there is a side effect of the registration with the Azure ARC service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource.
+> During public preview, there is a side effect of the registration with the Azure Arc service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource.
It may appear that you're able to manage aspects of the storage mover agent through the *Server-Azure Arc* resource, but in most cases you can't. It's best to manage the agent exclusively through the *Registered agents* pane in your storage mover resource or through the local administrative shell.
> [!WARNING]
-> Do not delete the Azure ARC server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to.
+> Do not delete the Azure Arc server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to.
### Authorization
For a migration job, access to the target endpoint is perhaps the most important
These assignments are made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. The agent isn't authorized to perform any management plane actions, such as deleting the target container or configuring any features on it. > [!WARNING]
-> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure ARC services.
+> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure Arc services.
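For example, revoking a stale assignment might look like the following Azure PowerShell sketch; the object ID, role name, and scope are assumptions for illustration:

```powershell
# Assumed values; substitute the agent's managed-identity object ID and your target container.
Remove-AzRoleAssignment -ObjectId '<agent-managed-identity-object-id>' `
    -RoleDefinitionName 'Storage Blob Data Contributor' `
    -Scope '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>'
```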
## Next steps
storage Blob V11 Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-dotnet.md
description: View code samples that use the Azure Blob Storage client library for .NET version 11.x. -+ Last updated 04/03/2023
storage Blob V11 Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-javascript.md
description: View code samples that use the Azure Blob Storage client library for JavaScript version 11.x. -+ Last updated 04/03/2023
storage Blob V2 Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v2-samples-python.md
description: View code samples that use the Azure Blob Storage client library for Python version 2.1. -+ Last updated 04/03/2023
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
description: The Blob Storage client library supports client-side encryption and
-+ Last updated 12/12/2022
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Object replication isn't supported for blobs in the source account that are encr
Customer-managed failover isn't supported for either the source or the destination account in an object replication policy.
-Object replication is not supported for blobs that are uploaded to the Data Lake Storage endpoint (`dfs.core.windows.net`) by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
+Object replication is not supported for blobs that are uploaded by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs.
## How object replication works
storage Sas Service Create Dotnet Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for .NET. -+ Last updated 06/22/2023
storage Sas Service Create Java Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Java. -+ Last updated 06/23/2023
storage Sas Service Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-javascript.md
description: Learn how to create a service shared access signature (SAS) for a container or blob using the Azure Blob Storage client library for JavaScript. -+ Last updated 01/19/2023
storage Sas Service Create Python Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python-container.md
description: Learn how to create a service shared access signature (SAS) for a container using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Sas Service Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python.md
description: Learn how to create a service shared access signature (SAS) for a blob using the Azure Blob Storage client library for Python. -+ Last updated 06/09/2023
storage Simulate Primary Region Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/simulate-primary-region-failure.md
description: Simulate an error in reading data from the primary region when the storage account is configured for read-access geo-zone-redundant storage (RA-GZRS). -+ Last updated 09/06/2022
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
description: Learn how to use the .NET client library to create a read-only snap
-+ Last updated 08/27/2020 ms.devlang: csharp
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
-+ Last updated 03/15/2023
storage Storage Blob Account Delegation Sas Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md
description: Create and use account SAS tokens in a JavaScript application that
-+ Last updated 11/30/2022
storage Storage Blob Append https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md
Last updated 03/28/2022-+ ms.devlang: csharp, python
storage Storage Blob Client Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md
description: Learn how to create and manage clients that interact with data reso
-+ Last updated 02/08/2023
storage Storage Blob Container Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 08/02/2023
storage Storage Blob Container Create Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library. -+ Last updated 11/30/2022
storage Storage Blob Container Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-python.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 08/02/2023
storage Storage Blob Container Create Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md
description: Learn how to create a blob container in your Azure Storage account using the JavaScript client library using TypeScript. -+ Last updated 03/21/2023
storage Storage Blob Container Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md
description: Learn how to create a blob container in your Azure Storage account
-+ Last updated 07/25/2022
storage Storage Blob Container Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 08/02/2023
storage Storage Blob Container Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md
-+ Last updated 11/30/2022 ms.devlang: javascript
storage Storage Blob Container Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md
description: Learn how to delete and restore a blob container in your Azure Stor
-+ Last updated 08/02/2023
storage Storage Blob Container Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md
-+ Last updated 03/21/2023 ms.devlang: TypeScript
storage Storage Blob Container Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md
-+ Last updated 03/28/2022
storage Storage Blob Container Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Container Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Container Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Container Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 08/02/2023
storage Storage Blob Container Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Container Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md
description: Learn how to set and retrieve system properties and store custom me
-+ Last updated 08/02/2023
storage Storage Blob Container Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Container Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Container User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/22/2023
storage Storage Blob Container User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/12/2023
storage Storage Blob Container User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a container with Azur
-+ Last updated 06/09/2023
storage Storage Blob Containers List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 08/02/2023
storage Storage Blob Containers List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md
-+ Last updated 11/30/2022
storage Storage Blob Containers List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md
description: Learn how to list blob containers in your Azure Storage account usi
-+ Last updated 08/02/2023
storage Storage Blob Containers List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md
-+ Last updated 03/21/2023
storage Storage Blob Containers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md
-+ Last updated 03/28/2022 ms.devlang: csharp
storage Storage Blob Copy Async Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Copy Async Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Async Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Copy Async Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md
Last updated 04/18/2023-+ ms.devlang: java
storage Storage Blob Copy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md
Last updated 04/28/2023-+ ms.devlang: python
storage Storage Blob Copy Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy Url Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md
Last updated 04/11/2023-+ ms.devlang: csharp
storage Storage Blob Copy Url Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Copy Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md
Last updated 05/08/2023-+ ms.devlang: javascript
storage Storage Blob Copy Url Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Copy Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md
Last updated 05/08/2023-+ ms.devlang: typescript
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Last updated 04/14/2023-+ ms.devlang: csharp
storage Storage Blob Create User Delegation Sas Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md
-+ Last updated 07/15/2022
storage Storage Blob Delete Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Delete Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Delete Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Delete Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
Last updated 05/11/2023-+ ms.devlang: csharp
storage Storage Blob Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md
-+ Last updated 07/12/2023 ms.devlang: csharp
storage Storage Blob Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Last updated 04/21/2023-+ ms.devlang: javascript
storage Storage Blob Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Last updated 05/23/2023-+ ms.devlang: csharp
storage Storage Blob Get Url Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md
Last updated 09/13/2022-+ ms.devlang: javascript
storage Storage Blob Get Url Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md
-+ Last updated 11/30/2022
storage Storage Blob Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Lease Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md
-+ Last updated 05/01/2023 ms.devlang: javascript
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Lease Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md
-+ Last updated 05/01/2023 ms.devlang: typescript
storage Storage Blob Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md
-+ Last updated 04/10/2023 ms.devlang: csharp
storage Storage Blob Object Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-object-model.md
-+ Last updated 03/07/2023
storage Storage Blob Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Properties Metadata Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Properties Metadata Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Properties Metadata Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Properties Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md
-+ Last updated 07/12/2023
storage Storage Blob Query Endpoint Srp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-query-endpoint-srp.md
-+ Last updated 06/07/2023
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
Last updated 11/30/2022-+ ms.devlang: javascript
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
Last updated 03/21/2023-+ ms.devlang: typescript
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
Last updated 03/28/2022-+ ms.devlang: csharp
storage Storage Blob Typescript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-typescript-get-started.md
-+ Last updated 03/21/2023
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
Last updated 08/02/2023-+ ms.devlang: java
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
Last updated 06/20/2023-+ ms.devlang: javascript
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
Last updated 08/02/2023-+ ms.devlang: python
storage Storage Blob Upload Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md
Last updated 06/21/2023-+ ms.devlang: typescript
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
Last updated 07/07/2023-+ ms.devlang: csharp
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
-+ Last updated 07/03/2023 ms.devlang: csharp
storage Storage Blob Use Access Tier Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md
-+ Last updated 08/02/2023 ms.devlang: java
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
-+ Last updated 06/28/2023 ms.devlang: javascript
storage Storage Blob Use Access Tier Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md
-+ Last updated 08/02/2023 ms.devlang: python
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
-+ Last updated 06/28/2023 ms.devlang: typescript
storage Storage Blob User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/22/2023
storage Storage Blob User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/12/2023
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a blob with Azure Act
-+ Last updated 06/06/2023
storage Storage Blobs List Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: java
To list the blobs in a storage account, call one of these methods:
- [listBlobs](/java/api/com.azure.storage.blob.BlobContainerClient) - [listBlobsByHierarchy](/java/api/com.azure.storage.blob.BlobContainerClient)
+### Manage how many results are returned
+
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for Java](/azure/developer/java/sdk/pagination).
+
+### Filter results with a prefix
+
+To filter the list of blobs, pass a string as the `prefix` parameter to [ListBlobsOptions.setPrefix(String prefix)](/java/api/com.azure.storage.blob.models.listblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
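
The prefix filter is applied by the Blob service itself, so the behavior is the same no matter which client library issues the request. As a minimal, SDK-independent sketch (assuming an authenticated Azure PowerShell session; `<storage-account>` and `sample-container` are placeholders):

```azurepowershell
# Hedged sketch: list only blobs whose names begin with "folderA/".
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount

Get-AzStorageBlob -Container "sample-container" -Prefix "folderA/" -Context $ctx |
    Select-Object -ExpandProperty Name
```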
+ ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character.
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory.
Page 3
Name: folderA/folderB/file3.txt, Is deleted? false ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-java.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md
-+ Previously updated : 11/30/2022 Last updated : 08/16/2023 ms.devlang: javascript
Related functionality can be found in the following methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results).
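
At the REST level, paging works the same way for every SDK: each response can carry a continuation token that you pass back to fetch the next page. A hedged sketch of that loop in Azure PowerShell, with placeholder names (`$ctx` is assumed to be an authenticated storage context created with `New-AzStorageContext`):

```azurepowershell
# Page through blobs 500 at a time using continuation tokens.
$token = $null
do {
    $page = Get-AzStorageBlob -Container "sample-container" -MaxCount 500 `
        -ContinuationToken $token -Context $ctx
    Write-Host "Fetched $($page.Count) blobs in this page"
    if ($page.Count -le 0) { break }
    # The last blob in a page carries the token that requests the next page.
    $token = $page[$page.Count - 1].ContinuationToken
} while ($null -ne $token)
```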
### Filter results with a prefix
-To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
```javascript const listOptions = {
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
Flat listing: 5: folder2/sub1/c
Flat listing: 6: folder2/sub1/d ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md
description: Learn how to list blobs in your storage account using the Azure Sto
-+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: python
To list the blobs in a container using a hierarchical listing, call the followin
- [ContainerClient.walk_blobs](/python/api/azure-storage-blob/azure.storage.blob.containerclient#azure-storage-blob-containerclient-walk-blobs) (along with the name, you can optionally include metadata, tags, and other information associated with each blob)
+### Filter results with a prefix
+
+To filter the list of blobs, specify a string for the `name_starts_with` keyword argument. The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+ ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character.
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory.
Name: folderA/file2.txt
Name: folderA/folderB/file3.txt ```
-You can also specify options to filter list results or show additional information. The following example lists blobs with a specified prefix, and also lists blob tags:
+You can also specify options to filter list results or show additional information. The following example lists blobs and blob tags:
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py" id="Snippet_list_blobs_flat_options":::
Name: folderA/file2.txt, Tags: None
Name: folderA/folderB/file3.txt, Tags: {'tag1': 'value1', 'tag2': 'value2'} ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-python.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md
-+ Previously updated : 03/21/2023 Last updated : 08/16/2023 ms.devlang: typescript
Related functionality can be found in the following methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results).
### Filter results with a prefix
-To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
+To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix.
```typescript const listOptions: ContainerListBlobsOptions = {
To organize blobs into virtual directories, use a delimiter character in the blo
If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory.
Flat listing: 5: folder2/sub1/c
Flat listing: 6: folder2/sub1/d ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md
-+ Previously updated : 02/14/2023 Last updated : 08/16/2023 ms.devlang: csharp
To list the blobs in a storage account, call one of these methods:
### Manage how many results are returned
-By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages.
+By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for .NET](/dotnet/azure/sdk/pagination).
### Filter results with a prefix
By default, a listing operation returns blobs in a flat listing. In a flat listi
The following example lists the blobs in the specified container using a flat listing, with an optional segment size specified, and writes the blob name to a console window.
-If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs.
- :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ListBlobsFlatListing"::: The sample output is similar to:
Blob name: FolderA/FolderB/FolderC/blob2.txt
Blob name: FolderA/FolderB/FolderC/blob3.txt ```
+> [!NOTE]
+> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.<br/><br/>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-dotnet.md#list-directory-contents).
+ ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy.
storage Storage Blobs Tune Upload Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 07/07/2023 ms.devlang: python
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
description: Learn how to tune your uploads and downloads for better performance
-+ Last updated 12/09/2022 ms.devlang: csharp
storage Storage Create Geo Redundant Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-create-geo-redundant-storage.md
description: Use read-access geo-zone-redundant (RA-GZRS) storage to make your a
-+ Last updated 09/02/2022
storage Storage Encrypt Decrypt Blobs Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md
Title: Encrypt and decrypt blobs using Azure Key Vault
description: Learn how to encrypt and decrypt a blob using client-side encryption with Azure Key Vault. -+ Last updated 11/2/2022
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
description: In this quickstart, you will learn how to use the Azure Blob Storag
Last updated 11/09/2022-+ ms.devlang: csharp
storage Storage Quickstart Blobs Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md
description: In this quickstart, you learn how to use the Azure Blob Storage cli
Last updated 02/13/2023-+ ms.devlang: golang
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
Last updated 10/24/2022-+ ms.devlang: java
storage Storage Quickstart Blobs Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md
description: In this quickstart, you learn how to use the Azure Blob Storage for
Last updated 10/28/2022-+ ms.devlang: javascript
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Last updated 10/24/2022 -+ ms.devlang: python
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
description: Learn about retry policies and how to implement them for Blob Storage. This article helps you set up a retry policy for Blob Storage requests using the Azure Storage client library for .NET. -+ Last updated 12/14/2022
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks
-description: Configure layered network security for your storage account by using Azure Storage firewalls and Azure Virtual Network.
+description: Configure layered network security for your storage account by using the Azure Storage firewall.
Previously updated : 08/01/2023 Last updated : 08/15/2023 -+ # Configure Azure Storage firewalls and virtual networks
-Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks or resources that you use.
+Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments require. In this article, you will learn how to configure the Azure Storage firewall to protect the data in your storage account at the network layer.
-When you configure network rules, only applications that request data over the specified set of networks or through the specified set of Azure resources can access a storage account. You can limit access to your storage account to requests that come from specified IP addresses, IP ranges, subnets in an Azure virtual network, or resource instances of some Azure services.
+> [!IMPORTANT]
+> Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules.
+>
+> Some operations, such as blob container operations, can be performed through both the control plane and the data plane. So if you attempt to perform an operation such as listing containers from the Azure portal, the operation will succeed unless it is blocked by another mechanism. Attempts to access blob data from an application such as Azure Storage Explorer are controlled by the firewall restrictions.
+>
+> For a list of data plane operations, see the [Azure Storage REST API Reference](/rest/api/storageservices/).
+> For a list of control plane operations, see the [Azure Storage Resource Provider REST API Reference](/rest/api/storagerp/).
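
One way to see the distinction is to compare a control plane call with a data plane call against the same account. The following is a hedged Azure PowerShell sketch with placeholder names; `Get-AzRmStorageContainer` goes through the resource provider, while `Get-AzStorageBlob` goes to the Blob service endpoint:

```azurepowershell
# Control plane: routed through Azure Resource Manager, so storage firewall
# rules don't apply (access is governed by Azure RBAC instead).
Get-AzRmStorageContainer -ResourceGroupName "<resource-group>" -StorageAccountName "<account-name>"

# Data plane: routed to the Blob service endpoint, so firewall rules apply.
# From a disallowed network, this call typically fails with a 403 error.
$ctx = New-AzStorageContext -StorageAccountName "<account-name>" -UseConnectedAccount
Get-AzStorageBlob -Container "sample-container" -Context $ctx
```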
-Storage accounts have a public endpoint that's accessible through the internet. You can also create [private endpoints for your storage account](storage-private-endpoints.md). Creating private endpoints assigns a private IP address from your virtual network to the storage account. It helps secure traffic between your virtual network and the storage account over a private link.
+## Configure network access to Azure Storage
-The Azure Storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when you're using private endpoints. Your firewall configuration also enables trusted Azure platform services to access the storage account.
+You can control access to the data in your storage account through network endpoints, or through trusted services and resources, in any combination of the following ways:
-An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token. When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized. The firewall rules remain in effect and will block anonymous traffic.
+- [Allow access from selected virtual network subnets using private endpoints](storage-private-endpoints.md).
+- [Allow access from selected virtual network subnets using service endpoints](#grant-access-from-a-virtual-network).
+- [Allow access from specific public IP addresses or ranges](#grant-access-from-an-internet-ip-range).
+- [Allow access from selected Azure resource instances](#grant-access-from-azure-resource-instances).
+- [Allow access from trusted Azure services](#grant-access-to-trusted-azure-services) (using [Manage exceptions](#manage-exceptions)).
+- [Configure exceptions for logging and metrics services](#manage-exceptions).
-Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service that operates within an Azure virtual network or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services.
+### About virtual network endpoints
-You can grant access to Azure services that operate from within a virtual network by allowing traffic from the subnet that hosts the service instance. You can also enable a limited number of scenarios through the exceptions mechanism that this article describes. To access data from the storage account through the Azure portal, you need to be on a machine within the trusted boundary (either IP or virtual network) that you set up.
+There are two types of virtual network endpoints for storage accounts:
+- [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+- [Private endpoints](storage-private-endpoints.md)
-## Scenarios
+Virtual network service endpoints are public and accessible via the internet. The Azure Storage firewall provides the ability to control access to your storage account over such public endpoints. When you enable public network access to your storage account, all incoming requests for data are blocked by default. Only applications that request data from allowed sources that you configure in your storage account firewall settings will be able to access your data. Sources can include the source IP address or virtual network subnet of a client, or an Azure service or resource instance through which clients or services access your data. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services, unless you explicitly allow access in your firewall configuration.
-To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific virtual networks. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration helps you build a secure network boundary for your applications.
+A private endpoint uses a private IP address from your virtual network to access a storage account over the Microsoft backbone network. With a private endpoint, traffic between your virtual network and the storage account is secured over a private link. Storage firewall rules only apply to the public endpoints of a storage account, not private endpoints. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, you can use the firewall to block all access through the public endpoint.
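
As a rough illustration of the private endpoint path, the following Azure PowerShell sketch creates a private endpoint for the blob service of a storage account. All names are placeholders, and it assumes the virtual network and subnet already exist; a working deployment also needs private DNS configuration, which is covered in the private endpoints article linked above.

```azurepowershell
# Hedged sketch: private endpoint for the blob service (placeholder names).
$account = Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<account-name>"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resource-group>" -Name "<vnet-name>"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnet-name>"

# The connection targets the "blob" group ID of the storage account.
$connection = New-AzPrivateLinkServiceConnection -Name "pe-conn-blob" `
    -PrivateLinkServiceId $account.Id -GroupId "blob"

New-AzPrivateEndpoint -ResourceGroupName "<resource-group>" -Name "pe-storage-blob" `
    -Location $vnet.Location -Subnet $subnet -PrivateLinkServiceConnection $connection
```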
-You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. You can apply storage firewall rules to existing storage accounts or when you create new storage accounts.
+To help you decide when to use each type of endpoint in your environment, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
-Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules to allow traffic for private endpoints of a storage account. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint.
+### How to approach network security for your storage account
-> [!IMPORTANT]
-> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior.
->
-> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
+To secure your storage account and build a secure network boundary for your applications, follow these steps (a hedged PowerShell sketch of the full sequence appears after the list):
+
+1. Start by disabling all public network access for the storage account under the **Public network access** setting in the storage account firewall.
+1. Where possible, configure private links to your storage account from private endpoints on virtual network subnets where the clients reside that require access to your data.
+1. If client applications require access over the public endpoints, change the **Public network access** setting to **Enabled from selected virtual networks and IP addresses**. Then, as needed:
-Network rules are enforced on all network protocols for Azure Storage, including REST and SMB. To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must configure explicit network rules.
+ 1. Specify the virtual network subnets from which you want to allow access.
+ 1. Specify the public IP address ranges of clients from which you want to allow access, such as those on on-premises networks.
+ 1. Allow access from selected Azure resource instances.
+ 1. Add exceptions to allow access from trusted services required for operations such as backing up data.
+ 1. Add exceptions for logging and metrics.
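
The following Azure PowerShell sketch maps roughly to the steps above. It's illustrative only: every name is a placeholder, and parameter names may vary slightly across Az.Storage versions.

```azurepowershell
$rg = "<resource-group>"
$acct = "<account-name>"

# Step 1: disable all public network access (private endpoints only).
Set-AzStorageAccount -ResourceGroupName $rg -Name $acct -PublicNetworkAccess Disabled

# Step 3: if public access is needed, allow it from selected networks only.
Set-AzStorageAccount -ResourceGroupName $rg -Name $acct -PublicNetworkAccess Enabled
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $rg -Name $acct -DefaultAction Deny

# Step 3a: allow a virtual network subnet (it needs a storage service endpoint).
$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "<vnet-name>"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnet-name>"
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name $acct -VirtualNetworkResourceId $subnet.Id

# Step 3b: allow a public IP range, such as an on-premises NAT range.
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name $acct -IPAddressOrRange "203.0.113.0/24"

# Step 3d: allow trusted Azure services as an exception.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $rg -Name $acct -Bypass AzureServices
```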
After you apply network rules, they're enforced for all requests. SAS tokens that grant access to a specific IP address serve to limit the access of the token holder, but they don't grant new access beyond configured network rules.
-Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O. Network rules help protect REST access to page blobs.
+## Restrictions and considerations
+
+Before implementing network security for your storage accounts, review the important restrictions and considerations discussed in this section.
+
+> [!div class="checklist"]
+>
+> - Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules.
+> - Review the [Restrictions for IP network rules](#restrictions-for-ip-network-rules).
+> - To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must be on a machine within the trusted boundary that you establish when configuring network security rules.
+> - Network rules are enforced on all network protocols for Azure Storage, including REST and SMB.
+> - Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O, but they do help protect REST access to page blobs.
+> - You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by [creating an exception](#manage-exceptions). Firewall exceptions aren't applicable to managed disks, because Azure already manages them.
+> - Classic storage accounts don't support firewalls and virtual networks.
+> - If you delete a subnet that's included in a virtual network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+> - When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior. Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
+> - By design, access to a storage account from trusted services takes the highest precedence over other network access restrictions. If you set **Public network access** to **Disabled** after previously setting it to **Enabled from selected virtual networks and IP addresses**, any [resource instances](#grant-access-from-azure-resource-instances) and [exceptions](#manage-exceptions) that you previously configured, including [Allow Azure services on the trusted services list to access this storage account](#grant-access-to-trusted-azure-services), will remain in effect. As a result, those resources and services might still have access to the storage account.
+
+### Authorization
-Classic storage accounts don't support firewalls and virtual networks.
+Clients granted access via network rules must continue to meet the authorization requirements of the storage account to access the data. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token.
-You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by creating an exception. The [Manage exceptions](#manage-exceptions) section of this article documents this process. Firewall exceptions aren't applicable with managed disks, because Azure already manages them.
+When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized, but the firewall rules remain in effect and will block anonymous traffic.
## Change the default network access rule
By default, storage accounts accept connections from clients on any network. You
You must set the default rule to **deny**, or network rules have no effect. However, changing this setting can affect your application's ability to connect to Azure Storage. Be sure to grant access to any allowed networks or set up access through a private endpoint before you change this setting. + ### [Portal](#tab/azure-portal) 1. Go to the storage account that you want to secure.
You can enable a [service endpoint](../../virtual-network/virtual-network-servic
Each storage account supports up to 200 virtual network rules. You can combine these rules with [IP network rules](#grant-access-from-an-internet-ip-range). > [!IMPORTANT]
-> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior.
+>
+> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior.
### Required permissions
Cross-region service endpoints for Azure Storage became generally available in A
Configuring service endpoints between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md) can be an important part of your disaster recovery plan. Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance.
-When you're planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
+When you're planning for disaster recovery during a regional outage, create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
Local and cross-region service endpoints can't coexist on the same subnet. To replace existing service endpoints with cross-region ones, delete the existing `Microsoft.Storage` endpoints and re-create them as cross-region endpoints (`Microsoft.Storage.Global`).
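
For example, the following Azure PowerShell sketch (placeholder names) performs the replacement in one step; setting the subnet's service endpoint list overwrites the existing `Microsoft.Storage` entry with the cross-region one:

```azurepowershell
# Hedged sketch: switch a subnet to the cross-region storage service endpoint.
$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resource-group>" -Name "<vnet-name>"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnet-name>"

$vnet | Set-AzVirtualNetworkSubnetConfig -Name "<subnet-name>" `
    -AddressPrefix $subnet.AddressPrefix `
    -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
```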
If you want to enable access to your storage account from a virtual network or s
6. Select **Save** to apply your changes.
+> [!IMPORTANT]
+> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account.
+ #### [PowerShell](#tab/azure-powershell) 1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps).
If you want to enable access to your storage account from a virtual network or s
You can create IP network rules to allow access from specific public internet IP address ranges. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks while blocking general internet traffic.
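
As a brief, hedged sketch (placeholder values), IP rules can be inspected, added, and removed with Azure PowerShell:

```azurepowershell
# Inspect the current IP rules on the account.
(Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" -Name "<account-name>").IpRules

# Add, and later remove, a rule for a public range (CIDR or single address).
Add-AzStorageAccountNetworkRule -ResourceGroupName "<resource-group>" -Name "<account-name>" -IPAddressOrRange "203.0.113.0/24"
Remove-AzStorageAccountNetworkRule -ResourceGroupName "<resource-group>" -Name "<account-name>" -IPAddressOrRange "203.0.113.0/24"
```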
+### Restrictions for IP network rules
+ The following restrictions apply to IP address ranges: - IP network rules are allowed only for *public internet* IP addresses.
To learn more about working with storage analytics, see [Use Azure Storage analy
## Next steps Learn more about [Azure network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).- Dig deeper into [Azure Storage security](../blobs/security-recommendations.md).
storage Elastic San Connect Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md
description: Learn how to connect to an Azure Elastic SAN Preview volume an Azur
Previously updated : 04/28/2023 Last updated : 07/11/2023
The iSCSI CSI driver for Kubernetes is [licensed under the Apache 2.0 license](h
## Prerequisites -- Have an [Azure Elastic SAN](elastic-san-create.md) with volumes - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - Meet the [compatibility requirements](https://github.com/kubernetes-csi/csi-driver-iscsi/blob/master/README.md#container-images--kubernetes-compatibility) for the iSCSI CSI driver
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations
After deployment, check the pods status to verify that the driver installed.
```bash kubectl -n kube-system get pod -o wide -l app=csi-iscsi-node ```
-### Configure Elastic SAN Volume Group
-
-To connect an Elastic SAN volume to an AKS cluster, you need to configure Elastic SAN Volume Group to allow access from AKS node pool subnets, follow [Configure Elastic SAN networking Preview](elastic-san-networking.md)
### Get volume information
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 04/24/2023 Last updated : 07/11/2023
In this article, you'll add the Storage service endpoint to an Azure virtual net
## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Networking configuration
-
-To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
-
-### Enable Storage service endpoint
-
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
--
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
-
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
-
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
-```
--
-### Configure volume group networking
-
-Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your SAN and select **Volume groups**.
-1. Select a volume group and select **Create**.
-1. Add an existing virtual network and subnet and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
-
-Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
-
-```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
-virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
-
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
-```
-- ## Connect to a volume You can create either single or multiple sessions to every Elastic SAN volume, based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 04/24/2023 Last updated : 07/11/2023
In this article, you'll add the Storage service endpoint to an Azure virtual net
## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)
+- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)]
-## Configure networking
-
-To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets.
-
-### Enable Storage service endpoint
-
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
-
-> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
--
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
-
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
-
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
-```
--
-### Configure volume group networking
-
-Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your SAN and select **Volume groups**.
-1. Select a volume group and select **Create**.
-1. Add an existing virtual network and subnet and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
-
-Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
-
-```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
-virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
-
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
-```
-- ## Connect to a volume You can create either single or multiple sessions to every Elastic SAN volume, based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows.
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
description: Learn how to deploy an Azure Elastic SAN (preview) with the Azure p
Previously updated : 08/14/2023 Last updated : 08/16/2023
This article explains how to deploy and configure an elastic storage area networ
# [PowerShell](#tab/azure-powershell)
+Replace all placeholder text with your own values when assigning values to the variables, and use the same variables in all of the examples in this article:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are to be deployed. |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. |
+| `<VolumeName>` | The name of the Elastic SAN Volume to be created. |
+| `<Location>` | The region where new resources will be created. |
+ The following command creates an Elastic SAN that uses locally redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`. ```azurepowershell ## Variables
-$rgName = "yourResourceGroupName"
+$RgName = "<ResourceGroupName>"
## Select the same availability zone as where you plan to host your workload
-$zone = 1
+$Zone = 1
## Select the same region as your Azure virtual network
-$region = "yourRegion"
-$sanName = "desiredSANName"
-$volGroupName = "desiredVolumeGroupName"
-$volName = "desiredVolumeName"
+$Location = "<Location>"
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+$VolumeName = "<VolumeName>"
## Create the SAN, itself
-New-AzElasticSAN -ResourceGroupName $rgName -Name $sanName -AvailabilityZone $zone -Location $region -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS
+New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -AvailabilityZone $Zone -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS
``` # [Azure CLI](#tab/azure-cli)
+Replace all placeholder text with your own values when assigning values to the variables, and use the same variables in all of the examples in this article:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are to be deployed. |
+| `<ElasticSanName>` | The name of the Elastic SAN to be created. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. |
+| `<VolumeName>` | The name of the Elastic SAN Volume to be created. |
+| `<Location>` | The region where new resources will be created. |
+ The following command creates an Elastic SAN that uses locally redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`. ```azurecli ## Variables
-sanName="yourSANNameHere"
-resourceGroupName="yourResourceGroupNameHere"
-sanLocation="desiredRegion"
-volumeGroupName="desiredVolumeGroupName"
+RgName="<ResourceGroupName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+Location="<Location>"
-az elastic-san create -n $sanName -g $resourceGroupName -l $sanLocation --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}"
+az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}"
```
Now that you've configured the basic settings and provisioned your storage, you
# [PowerShell](#tab/azure-powershell)
+The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
```azurepowershell ## Create the volume group, this script only creates one.
-New-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSANName $sanName -Name $volGroupName
+New-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSANName $EsanName -Name $EsanVgName
``` # [Azure CLI](#tab/azure-cli)
+The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
+ ```azurecli
-az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -n $volumeGroupName
+az elastic-san volume-group create --elastic-san-name $EsanName -g $RgName -n $EsanVgName
```
Volumes are usable partitions of the SAN's total capacity, you must allocate a p
# [PowerShell](#tab/azure-powershell)
-In this article, we provide you the command to create a single volume. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md).
+The following sample command creates a single volume in the Elastic SAN volume group you created previously. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md). Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
> [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
-Replace `volumeName` with the name you'd like the volume to use, then run the following script:
+Using those variables, run the following script:
```azurepowershell ## Create the volume, this command only creates one.
-New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -Name $volName -sizeGiB 2000
+New-AzElasticSanVolume -ResourceGroupName $RgName -ElasticSanName $EsanName -VolumeGroupName $EsanVgName -Name $VolumeName -sizeGiB 2000
``` # [Azure CLI](#tab/azure-cli)
New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -Volu
> [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created.
-Replace `$volumeName` with the name you'd like the volume to use, then run the following script:
+The following sample command creates an Elastic SAN volume in the Elastic SAN volume group you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san).
```azurecli
-az elastic-san volume create --elastic-san-name $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib 2000
+az elastic-san volume create --elastic-san-name $EsanName -g $RgName -v $EsanVgName -n $VolumeName --size-gib 2000
```
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
description: An overview of Azure Elastic SAN Preview, a service that enables yo
Previously updated : 05/02/2023 Last updated : 08/15/2023
The status of items in this table may change over time.
| Encryption at rest| ✔️ | | Encryption in transit| ⛔ | | [LRS or ZRS redundancy types](elastic-san-planning.md#redundancy)| ✔️ |
-| Private endpoints | ⛔ |
+| Private endpoints | ✔️ |
| Grant network access to specific Azure virtual networks| ✔️ | | Soft delete | ⛔ | | Snapshots | ⛔ |
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
+
+ Title: Azure Elastic SAN networking Preview concepts
+description: An overview of Azure Elastic SAN Preview networking options, including storage service endpoints, private endpoints, and iSCSI.
+++ Last updated : 08/16/2023++++
+# Elastic SAN Preview networking
+
+Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+
+You can configure Elastic SAN volume groups to allow access only over specific endpoints on specific virtual network subnets. The allowed subnets can belong to virtual networks in the same subscription or in a different subscription, including subscriptions that belong to a different Azure Active Directory tenant. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
+
+Depending on your configuration, applications on peered virtual networks or on-premises networks can also access volumes in the group. On-premises networks must be connected to the virtual network by a VPN or ExpressRoute. For more details about virtual network configurations, see [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+
+There are two types of virtual network endpoints you can configure to allow access to an Elastic SAN volume group:
+
+- [Storage service endpoints](#storage-service-endpoints)
+- [Private endpoints](#private-endpoints)
+
+To decide which option is best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). Generally, you should use private endpoints instead of service endpoints since Private Link offers better capabilities. For more information, see [Azure Private Link](../../private-link/private-endpoint-overview.md).
+
+After configuring endpoints, you can configure network rules to further control access to your Elastic SAN volume group. Once the endpoints and network rules have been configured, clients can connect to volumes in the group to process their workloads.
+
+## Storage service endpoints
+
+[Azure Virtual Network (VNet) service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them.
+
+[Cross-region service endpoints for Azure Storage](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints) work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all the traffic from a subnet to a storage account uses a private IP address as a source IP.
+
+> [!TIP]
+> The original local service endpoints, identified as **Microsoft.Storage**, are still supported for backward compatibility, but you should create cross-region endpoints, identified as **Microsoft.Storage.Global**, for new deployments.
+>
+> Cross-region service endpoints and local ones can't coexist on the same subnet. To use cross-region service endpoints, you might have to delete existing **Microsoft.Storage** endpoints and recreate them as **Microsoft.Storage.Global**.
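As a rough sketch of that migration with PowerShell, assuming placeholder resource names and that the subnet currently has a **Microsoft.Storage** endpoint (`Set-AzVirtualNetworkSubnetConfig -ServiceEndpoint` replaces the subnet's endpoint list):

```powershell
# Placeholders; replace with your own values.
$RgName     = "<ResourceGroupName>"
$VnetName   = "<VnetName>"
$SubnetName = "<SubnetName>"

# Rewrite the subnet's service endpoint list with the cross-region endpoint.
$Vnet   = Get-AzVirtualNetwork -ResourceGroupName $RgName -Name $VnetName
$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name $SubnetName
$Vnet | Set-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix $Subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
```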
+
+## Private endpoints
+
+> [!IMPORTANT]
+> Private endpoints for Elastic SAN Preview are currently only supported in France Central.
+
+Azure [Private Link](../../private-link/private-link-overview.md) enables you to access an Elastic SAN volume group securely over a [private endpoint](../../private-link/private-endpoint-overview.md) from a virtual network subnet. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating the risk of exposing your service to the public internet. An Elastic SAN private endpoint uses a set of IP addresses from the subnet address space for each volume group. The maximum number used per endpoint is 20.
+
+Private endpoints have several advantages over service endpoints. For a complete comparison of private endpoints to service endpoints, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+
+Traffic between the virtual network and the Elastic SAN is routed over an optimal path on the Azure backbone network. Unlike service endpoints, you don't need to configure network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints.
+
+For details on how to configure private endpoints, see [Enable private endpoint](elastic-san-networking.md#configure-a-private-endpoint).
+
+## Virtual network rules
+
+To further secure access to your Elastic SAN volumes, you can create virtual network rules for volume groups configured with service endpoints to allow access from specific subnets. You don't need network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints.
+
+Each volume group supports up to 200 virtual network rules. If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
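For example, re-authorizing a recreated subnet might look like the following PowerShell sketch, which reuses the cmdlets from the how-to article; all names are placeholders:

```powershell
# Placeholders; replace with your own values.
$RgName = "<ResourceGroupName>"; $VnetName = "<VnetName>"; $SubnetName = "<SubnetName>"
$EsanName = "<ElasticSanName>"; $EsanVgName = "<ElasticSanVolumeGroupName>"

# Get the recreated subnet and explicitly authorize it on the volume group again.
$Vnet = Get-AzVirtualNetwork -ResourceGroupName $RgName -Name $VnetName
$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name $SubnetName
$Rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $Subnet.Id -Action Allow
Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $RgName -ElasticSanName $EsanName -VolumeGroupName $EsanVgName -NetworkAclsVirtualNetworkRule $Rule
```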
+
+Clients granted access via these network rules must also be granted the appropriate permissions to the Elastic SAN volume group.
+
+To learn how to define network rules, see [Managing virtual network rules](elastic-san-networking.md#configure-virtual-network-rules).
+
+## Client connections
+
+After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. For more details on how to configure client connections, see [Configure access to Elastic SAN volumes from clients](elastic-san-networking.md#configure-client-connections).
+
+> [!NOTE]
+> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection retries for 90 seconds before terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+
+## Next steps
+
+[Configure Elastic SAN networking Preview](elastic-san-networking.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Title: Azure Elastic SAN networking Preview
-description: An overview of Azure Elastic SAN Preview, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols.
+ Title: How to configure Azure Elastic SAN Preview networking
+description: Learn how to configure networking for Azure Elastic SAN Preview, a service that enables you to create and use storage area networks (SANs) in the cloud.
Previously updated : 05/04/2023 Last updated : 08/16/2023 -+
-# Configure Elastic SAN networking Preview
+# Configure networking for an Elastic SAN Preview
-Azure Elastic storage area network (SAN) allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments demand, based on the type and subset of networks or resources used. When network rules are configured, only applications requesting data over the specified set of networks or through the specified set of Azure resources that can access an Elastic SAN Preview. Access to your SAN's volumes are limited to resources in subnets in the same Azure Virtual Network that your SAN's volume group is configured with.
+Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require.
-Volume groups are configured to allow access only from specific subnets. The allowed subnets may belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+This article describes how to configure your Elastic SAN to allow access from your Azure virtual network infrastructure.
-You must enable a [Service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the SAN that allow requests to be received from specific subnets in a virtual network. Clients granted access via these network rules must continue to meet the authorization requirements of the Elastic SAN to access the data.
+You can configure your Elastic SAN volume groups to allow access only from endpoints on specific virtual network subnets. The allowed subnets may belong to virtual networks in the same subscription, or those in a different subscription, including a subscription belonging to a different Azure Active Directory tenant.
-Each volume group supports up to 200 virtual network rules.
+To configure network access to your Elastic SAN:
-> [!IMPORTANT]
-> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
+> [!div class="checklist"]
+> - [Configure a virtual network endpoint](#configure-a-virtual-network-endpoint).
+> - [Configure virtual network rules](#configure-virtual-network-rules) to control the source and type of traffic to your Elastic SAN.
+> - [Configure client connections](#configure-client-connections).
+
+## Configure a virtual network endpoint
+
+You can allow access to your Elastic SAN volume groups from two types of Azure virtual network endpoints:
+
+- [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
+- [Private endpoints](../../private-link/private-endpoint-overview.md)
+
+To decide which type of endpoint works best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+
+Each volume group can be configured to allow access from either public storage service endpoints or private endpoints, but not both at the same time. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
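Before changing a volume group's endpoint configuration, you can inspect what it currently allows. A short sketch, assuming the Az.ElasticSan preview module and placeholder names:

```powershell
# Placeholders; replace with your own values.
$RgName = "<ResourceGroupName>"; $EsanName = "<ElasticSanName>"; $EsanVgName = "<ElasticSanVolumeGroupName>"

# Show the volume group's current virtual network rules.
$VolGroup = Get-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $EsanName -Name $EsanVgName
$VolGroup.NetworkAclsVirtualNetworkRule
```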
-## Enable Storage service endpoint
+The process for enabling each type of endpoint follows:
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant.
+- [Configure an Azure Storage service endpoint](#configure-an-azure-storage-service-endpoint)
+- [Configure a private endpoint](#configure-a-private-endpoint)
+
+### Configure an Azure Storage service endpoint
+
+You can configure an Azure Storage service endpoint from the virtual network where access is required. You must have permission to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role to configure a service endpoint.
> [!NOTE] > Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant is currently only supported through PowerShell, CLI, and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal. # [Portal](#tab/azure-portal)+ 1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
+1. Select **+ Add**.
+1. On the **Add service endpoints** screen:
+ 1. For **Service** select **Microsoft.Storage.Global** to add a [cross-region service endpoint](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints).
+
+ > [!NOTE]
+ > You might see **Microsoft.Storage** listed as an available storage service endpoint. That option is for intra-region endpoints which exist for backward compatibility only. Always use cross-region endpoints unless you have a specific reason for using intra-region ones.
+
+1. For **Subnets** select all the subnets where you want to allow access.
+1. Select **Add**.
:::image type="content" source="media/elastic-san-create/elastic-san-service-endpoint.png" alt-text="Screenshot of the virtual network service endpoint page, adding the storage service endpoint." lightbox="media/elastic-san-create/elastic-san-service-endpoint.png"::: # [PowerShell](#tab/azure-powershell)
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
+Use this sample code to create a storage service endpoint for your Elastic SAN volume group with PowerShell.
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
+```powershell
+# Define some variables
+$RgName = "<ResourceGroupName>"
+$VnetName = "<VnetName>"
+$SubnetName = "<SubnetName>"
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
+# Get the virtual network and subnet
+$Vnet = Get-AzVirtualNetwork -ResourceGroupName $RgName -Name $VnetName
+$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name $SubnetName
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
+# Enable the storage service endpoint
+$Vnet | Set-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix $Subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
``` # [Azure CLI](#tab/azure-cli)
+Use this sample code to create a storage service endpoint for your Elastic SAN volume group with the Azure CLI.
+ ```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
+# Define some variables
+RgName="<ResourceGroupName>"
+VnetName="<VnetName>"
+SubnetName="<SubnetName>"
+
+# Enable the storage service endpoint
+az network vnet subnet update --resource-group $RgName --vnet-name $VnetName --name $SubnetName --service-endpoints "Microsoft.Storage.Global"
```+
-### Available virtual network regions
+### Configure a private endpoint
+
+> [!IMPORTANT]
+> - Private endpoints for Elastic SAN Preview are currently only supported in France Central.
+>
+> - Before you can create a private endpoint connection to a volume group, it must contain at least one volume.
+
+There are two steps involved in configuring a private endpoint connection:
+
+> [!div class="checklist"]
+> - Creating the endpoint and the associated connection.
+> - Approving the connection.
+
+To create a private endpoint for an Elastic SAN volume group, you must have the [Elastic SAN Volume Group Owner](../../role-based-access-control/built-in-roles.md#elastic-san-volume-group-owner) role. To approve a new private endpoint connection, you must have permission to the [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftelasticsan) `Microsoft.ElasticSan/elasticSans/PrivateEndpointConnectionsApproval/action`. Permission for this operation is included in the [Elastic SAN Network Admin](../../role-based-access-control/built-in-roles.md#elastic-san-owner) role, but it can also be granted via a custom Azure role.
+
+If you create the endpoint from a user account that has all of the necessary roles and permissions required for creation and approval, the process can be completed in one step. If not, it will require two separate steps by two different users.
+
+The Elastic SAN and the virtual network may be in different resource groups, regions, and subscriptions, including subscriptions that belong to different Azure AD tenants. In these examples, we create the private endpoint in the same resource group as the virtual network.
+
+# [Portal](#tab/azure-portal)
+
+Currently, you can only configure a private endpoint using PowerShell or the Azure CLI.
+
+# [PowerShell](#tab/azure-powershell)
-Service endpoints for Azure Storage work between virtual networks and service instances in any region. They also work between virtual networks and service instances in [paired regions](../../availability-zones/cross-region-replication-azure.md) to allow continuity during a regional failover. When planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your zone-redundant SANs.
+Deploying a private endpoint for an Elastic SAN Volume group using PowerShell involves these steps:
-#### Azure Storage cross-region service endpoints
+1. Get the subnet from which applications will connect.
+1. Get the Elastic SAN Volume Group.
+1. Create a private link service connection using the volume group as input.
+1. Create the private endpoint using the subnet and the private link service connection as input.
+1. *(Optional, if you're using the two-step process of creation followed by approval)*: The Elastic SAN Network Admin approves the connection.
-Cross-region service endpoints for Azure became generally available in April of 2023. With cross-region service endpoints, subnets will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
+Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace all placeholder text with your own values:
-To use cross-region service endpoints, it might be necessary to delete existing **Microsoft.Storage** endpoints and recreate them as cross-region (**Microsoft.Storage.Global**).
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
+| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
+| `<VnetName>` | The name of the virtual network that includes the subnet. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
+| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
+| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
+| `<PrivateEndpointName>` | The name of the new private endpoint. |
+| `<Location>` | The region where the new private endpoint will be created. |
+| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. |
+
+```powershell
+# Set the resource group name.
+$RgName = "<ResourceGroupName>"
+
+# Get the virtual network and subnet, which is input to creating the private endpoint.
+$VnetName = "<VnetName>"
+$SubnetName = "<SubnetName>"
+
+$Vnet = Get-AzVirtualNetwork -Name $VnetName -ResourceGroupName $RgName
+$Subnet = $Vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $SubnetName}
+
+# Get the Elastic SAN, which is input to creating the private endpoint service connection.
+$EsanName = "<ElasticSanName>"
+$EsanVgName = "<ElasticSanVolumeGroupName>"
+
+$Esan = Get-AzElasticSan -Name $EsanName -ResourceGroupName $RgName
+
+# Create the private link service connection, which is input to creating the private endpoint.
+$PLSvcConnectionName = "<PrivateLinkSvcConnectionName>"
+$EsanPlSvcConn = New-AzPrivateLinkServiceConnection -Name $PLSvcConnectionName -PrivateLinkServiceId $Esan.Id -GroupId $EsanVgName
+
+# Create the private endpoint.
+$EndpointName = '<PrivateEndpointName>'
+$Location = '<Location>'
+$PeArguments = @{
+ Name = $EndpointName
+ ResourceGroupName = $RgName
+ Location = $Location
+ Subnet = $Subnet
+ PrivateLinkServiceConnection = $EsanPlSvcConn
+}
+New-AzPrivateEndpoint @PeArguments # -ByManualRequest # (Uncomment the `-ByManualRequest` parameter if you are using the two-step process).
+```
+
+Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample:
+
+```powershell
+# Get the private endpoint and associated connection.
+$PrivateEndpoint = Get-AzPrivateEndpoint -Name $EndpointName -ResourceGroupName $RgName
+$PeConnArguments = @{
+ ServiceName = $EsanName
+ ResourceGroupName = $RgName
+ PrivateLinkResourceType = "Microsoft.ElasticSan/elasticSans"
+}
+$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments |
+Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)}
+
+# Approve the private link service connection.
+$ApprovalDesc="<ApprovalDesc>"
+Approve-AzPrivateEndpointConnection @PeConnArguments -Name $EndpointConnection.Name -Description $ApprovalDesc
+
+# Get the private endpoint connection anew and verify the connection status.
+$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments |
+Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)}
+$EndpointConnection.PrivateLinkServiceConnectionState
+```
+
+# [Azure CLI](#tab/azure-cli)
-## Managing virtual network rules
+Deploying a private endpoint for an Elastic SAN Volume group using the Azure CLI involves two steps:
+
+1. Get the private connection resource ID of the Elastic SAN.
+1. Create the private endpoint using inputs:
+ 1. Private connection resource ID
+ 1. Volume group name
+ 1. Resource group name
+ 1. Subnet name
+ 1. Vnet name
+
+Use this sample code to create a private endpoint for your Elastic SAN volume group with the Azure CLI. Replace all placeholder text with your own values:
+
+| Placeholder | Description |
+|-|-|
+| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
+| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
+| `<VnetName>` | The name of the virtual network that includes the subnet. |
+| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
+| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
+| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
+| `<PrivateEndpointName>` | The name of the new private endpoint. |
+| `<Location>` | The region where the new private endpoint will be created. |
+
+```azurecli
+# Define some variables
+RgName="<ResourceGroupName>"
+VnetName="<VnetName>"
+SubnetName="<SubnetName>"
+EsanName="<ElasticSanName>"
+EsanVgName="<ElasticSanVolumeGroupName>"
+EndpointName="<PrivateEndpointName>"
+PLSvcConnectionName="<PrivateLinkSvcConnectionName>"
+Location="<Location>"
+
+id=$(az elastic-san show \
+ --elastic-san-name $EsanName \
+ --resource-group $RgName \
+ --query 'id' \
+ --output tsv)
+
+# Create private endpoint
+az network private-endpoint create \
+ --connection-name $PLSvcConnectionName \
+ --name $EndpointName \
+ --private-connection-resource-id $id \
+ --resource-group $RgName \
+ --vnet-name $VnetName \
+ --subnet $SubnetName \
+ --location $Location \
+ --group-id $EsanVgName
+
+# Verify the status of the private endpoint
+az network private-endpoint show \
+ --name $EndpointName \
+ --resource-group $RgName
+```
++
+## Configure virtual network rules
You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI.
-> [!NOTE]
+> [!IMPORTANT]
> If you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.
+>
+> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
### [Portal](#tab/azure-portal)
You can manage virtual network rules for volume groups through the Azure portal,
- List virtual network rules. ```azurepowershell
- $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSanName $sanName -Name $volGroupName
+ $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $EsanName -Name $EsanVgName
$Rules.NetworkAclsVirtualNetworkRule ```
You can manage virtual network rules for volume groups through the Azure portal,
- Add a network rule for a virtual network and subnet. ```azurepowershell
- $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
+ $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $Subnet.Id -Action Allow
- Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
+ Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $RgName -ElasticSanName $EsanName -VolumeGroupName $EsanVgName -NetworkAclsVirtualNetworkRule $rule
``` > [!TIP]
You can manage virtual network rules for volume groups through the Azure portal,
- List information from a particular volume group, including their virtual network rules. ```azurecli
- az elastic-san volume-group show -e $sanName -g $resourceGroupName -n $volumeGroupName
+ az elastic-san volume-group show -e $EsanName -g $RgName -n $EsanVgName
``` - Enable service endpoint for Azure Storage on an existing virtual network and subnet.
You can manage virtual network rules for volume groups through the Azure portal,
```azurecli # First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.
- virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)'
+ virtualNetworkListLength=$(az elastic-san volume-group show -e $EsanName -n $EsanVgName -g $RgName --query 'length(networkAcls.virtualNetworkRules)')
- az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+ az elastic-san volume-group update -e $EsanName -g $RgName --name $EsanVgName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/default, action:Allow}]}"
``` - Remove a network rule. The following command removes the first network rule, modify it to remove the network rule you'd like. ```azurecli
- az elastic-san volume-group update -e $sanName -g $resourceGroupName -n $volumeGroupName --network-acls virtual-network-rules[1]=null
+ az elastic-san volume-group update -e $EsanName -g $RgName -n $EsanVgName --network-acls virtual-network-rules[1]=null
```
+## Configure client connections
+
+After you have enabled the desired endpoints and granted access in your network rules, you are ready to configure your clients to connect to the appropriate Elastic SAN volumes.
+
+> [!NOTE]
+> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection retries for 90 seconds before terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+ ## Next steps
-[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md)
+- [Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md)
+- [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md)
+- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md)
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
description: Understand planning for an Azure Elastic SAN deployment. Learn abou
Previously updated : 05/02/2023 Last updated : 06/09/2023
Using the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s. Sa
## Networking
-In Preview, Elastic SAN supports public access from selected virtual networks, restricting access to specified virtual networks. You configure volume groups to allow network access only from specific vnet subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes belonging to the volume group. You can then mount volumes from any clients in the subnet, with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. You must enable [service endpoint for Azure Storage](../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on volume group.
+In the Elastic SAN Preview, you can configure access to volume groups over both public [Azure Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
-If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
+To allow network access, you must [enable a service endpoint for Azure Storage](elastic-san-networking.md#configure-an-azure-storage-service-endpoint) or a [private endpoint](elastic-san-networking.md#configure-a-private-endpoint) in your virtual network, then [set up a network rule](elastic-san-networking.md#configure-virtual-network-rules) on the volume group for any service endpoints. You don't need a network rule to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. You can then mount volumes from [AKS](elastic-san-connect-aks.md), [Linux](elastic-san-connect-linux.md), or [Windows](elastic-san-connect-windows.md) clients in the subnet with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.
## Redundancy
The following iSCSI features aren't currently supported:
For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san).
+[Networking options for Elastic SAN Preview](elastic-san-networking-concepts.md)
[Deploy an Elastic SAN Preview](elastic-san-create.md)
storage Elastic San Shared Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-shared-volumes.md
+
+ Title: Use clustered applications on Azure Elastic SAN
+description: Learn more about using clustered applications on an Elastic SAN volume and sharing volumes between compute clients.
+++ Last updated : 08/15/2023++++
+# Use clustered applications on Azure Elastic SAN
+
+Azure Elastic SAN volumes can be attached to multiple compute clients simultaneously, allowing you to deploy or migrate clustered applications to Azure. To share an Elastic SAN volume, you need a cluster manager, such as Windows Server Failover Cluster (WSFC) or Pacemaker, that handles cluster node communications and write locking. Elastic SAN doesn't natively offer a fully managed filesystem that can be accessed over SMB or NFS.
+
+When used as a shared volume, Elastic SAN volumes can be shared across availability zones or regions. If you share a volume across availability zones, you should select [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) when deploying your SAN. Sharing a volume in a locally redundant storage (LRS) SAN across zones reduces your performance due to increased latency between the volume and clients.
+
+## Limitations
+
+- Volumes in an Elastic SAN using [ZRS](elastic-san-planning.md#redundancy) can't be used as shared volumes.
+- Elastic SAN connection scripts can be used to attach shared volumes to virtual machines in Virtual Machine Scale Sets or virtual machines in Availability Sets. Fault domain alignment isn't supported.
+- The maximum number of sessions a shared volume supports is 128.
+ - An individual client can create multiple sessions to an individual volume for increased performance. For example, if you create 32 sessions on each of your clients, only four clients could connect to a single volume.
+
+See [Support for Azure Storage features](elastic-san-introduction.md#support-for-azure-storage-features) for other limitations of Elastic SAN.
+
+## Regional availability
+
+Currently, only Elastic SAN volumes in France Central can be used as shared volumes.
+
+## How it works
+
+Elastic SAN shared volumes use [SCSI-3 Persistent Reservations](https://www.t10.org/members/w_spc3.htm) to allow initiators (clients) to control access to a shared elastic SAN volume. This protocol enables an initiator to reserve access to an elastic SAN volume, limit write (or read) access by other initiators, and persist the reservation on a volume beyond the lifetime of a session by default.
+
+SCSI-3 PR has a pivotal role in maintaining data consistency and integrity within shared volumes in cluster scenarios. Compute nodes in a cluster can read or write to their attached elastic SAN volumes based on the reservation chosen by their cluster applications.
+
+## Persistent reservation flow
+
+The following diagram illustrates a sample 2-node clustered database application that uses SCSI-3 PR to enable failover from one node to the other.
++
+The flow is as follows:
+
+1. The clustered application running on both Azure VM1 and VM2 registers its intent to read or write to the elastic SAN volume.
+1. The application instance on VM1 then takes an exclusive reservation to write to the volume.
+1. This reservation is enforced on your volume and the database can now exclusively write to the volume. Any writes from the application instance on VM2 fail.
+1. If the application instance on VM1 goes down, the instance on VM2 can initiate a database failover and take over control of the volume.
+1. This reservation is now enforced on the volume, and it won't accept writes from VM1. It only accepts writes from VM2.
+1. The clustered application can complete the database failover and serve requests from VM2.
+
+The following diagram illustrates another common clustered workload consisting of multiple nodes reading data from an elastic SAN volume for running parallel processes, such as training of machine learning models.
++
+The flow is as follows:
+1. The clustered application running on all VMs registers its intent to read or write to the elastic SAN volume.
+1. The application instance on VM1 takes an exclusive reservation to write to the volume while opening up reads to the volume from other VMs.
+1. This reservation is enforced on the volume.
+1. All nodes in the cluster can now read from the volume. Only one node writes back results to the volume, on behalf of all nodes in the cluster.
+
+## Supported SCSI PR commands
+
+The following commands are supported with Elastic SAN volumes:
+
+To interact with the volume, start with the appropriate persistent reservation action:
+- PR_REGISTER_KEY
+- PR_REGISTER_AND_IGNORE
+- PR_GET_CONFIGURATION
+- PR_RESERVE
+- PR_PREEMPT_RESERVATION
+- PR_CLEAR_RESERVATION
+- PR_RELEASE_RESERVATION
+
+When using PR_RESERVE, PR_PREEMPT_RESERVATION, or PR_RELEASE_RESERVATION, provide one of the following persistent reservation types:
+- PR_NONE
+- PR_WRITE_EXCLUSIVE
+- PR_EXCLUSIVE_ACCESS
+- PR_WRITE_EXCLUSIVE_REGISTRANTS_ONLY
+- PR_EXCLUSIVE_ACCESS_REGISTRANTS_ONLY
+- PR_WRITE_EXCLUSIVE_ALL_REGISTRANTS
+- PR_EXCLUSIVE_ACCESS_ALL_REGISTRANTS
+
+Persistent reservation type determines access to the volume from each node in the cluster.
+
+|Persistent Reservation Type |Reservation Holder |Registered |Others |
+|||||
+|NO RESERVATION |N/A |Read-Write |Read-Write |
+|WRITE EXCLUSIVE |Read-Write |Read-Only |Read-Only |
+|EXCLUSIVE ACCESS |Read-Write |No Access |No Access |
+|WRITE EXCLUSIVE - REGISTRANTS ONLY |Read-Write |Read-Write |Read-Only |
+|EXCLUSIVE ACCESS - REGISTRANTS ONLY |Read-Write |Read-Write |No Access |
+|WRITE EXCLUSIVE - ALL REGISTRANTS |Read-Write |Read-Write |Read-Only |
+|EXCLUSIVE ACCESS - ALL REGISTRANTS |Read-Write |Read-Write |No Access |
+
+You also need to provide a persistent-reservation-key when using:
+- PR_RESERVE
+- PR_REGISTER_AND_IGNORE
+- PR_REGISTER_KEY
+- PR_PREEMPT_RESERVATION
+- PR_CLEAR_RESERVATION
+- PR_RELEASE_RESERVATION
stream-analytics Capture Event Hub Data Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md
Previously updated : 05/24/2022 Last updated : 08/15/2023 # Capture data from Event Hubs in Parquet format-
-This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in Parquet format. You have the flexibility of specifying a time or size interval.
+This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in the Parquet format.
## Prerequisites -- Your Azure Event Hubs and Azure Data Lake Storage Gen2 resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.-- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.
+- An Azure Event Hubs namespace with an event hub and an Azure Data Lake Storage Gen2 account with a container to store the captured data. These resources must be publicly accessible and can't be behind a firewall or secured in an Azure virtual network.
+
+ If you don't have an event hub, create one by following instructions from [Quickstart: Create an event hub](../event-hubs/event-hubs-create.md).
+
+  If you don't have a Data Lake Storage Gen2 account, create one by following instructions from [Create a storage account](../storage/blobs/create-data-lake-storage-account.md).
+- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format. For testing purposes, select **Generate data (preview)** on the left menu, select **Stocks data** for dataset, and then select **Send**.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/stocks-data.png" alt-text="Screenshot showing the Generate data page to generate sample stocks data." lightbox="./media/capture-event-hub-data-parquet/stocks-data.png":::
## Configure a job to capture data Use the following steps to configure a Stream Analytics job to capture data in Azure Data Lake Storage Gen2. 1. In the Azure portal, navigate to your event hub.
-1. Select **Features** > **Process Data**, and select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card.
+1. On the left menu, select **Process Data** under **Features**. Then, select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" alt-text="Screenshot showing the Process Event Hubs data start cards." lightbox="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" :::
-1. Enter a **name** to identify your Stream Analytics job. Select **Create**.
- :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." lightbox="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" :::
-1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job will use to connect to Event Hubs. Then select **Connect**.
+1. Enter a **name** for your Stream Analytics job, and then select **Create**.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." :::
+1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job uses to connect to Event Hubs. Then select **Connect**.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/capture-event-hub-data-parquet/event-hub-configuration.png" :::
-1. When the connection is established successfully, you'll see:
+1. When the connection is established successfully, you see:
- Fields that are present in the input data. You can choose **Add field** or you can select the three dot symbol next to a field to optionally remove, rename, or change its name. - A live sample of incoming data in the **Data preview** table under the diagram view. It refreshes periodically. You can select **Pause streaming preview** to view a static view of the sample input.
+
:::image type="content" source="./media/capture-event-hub-data-parquet/edit-fields.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/capture-event-hub-data-parquet/edit-fields.png" ::: 1. Select the **Azure Data Lake Storage Gen2** tile to edit the configuration. 1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps: 1. Select the subscription, storage account name and container from the drop-down menu. 1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in.
+ 1. Select **Parquet** for **Serialization** format.
+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/job-top-settings.png" alt-text="Screenshot showing the Data Lake Storage Gen2 configuration page." lightbox="./media/capture-event-hub-data-parquet/job-top-settings.png":::
1. For streaming blobs, the directory path pattern is expected to be a dynamic value. It's required for the date to be a part of the file path for the blob, referenced as `{date}`. For example, a pattern like `exampleoutput/{date}` (the folder name here is hypothetical) writes each day's Parquet files to a separate folder. To learn about custom path patterns, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md).
+
:::image type="content" source="./media/capture-event-hub-data-parquet/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-parquet/blob-configuration.png" ::: 1. Select **Connect**
-1. When the connection is established, you'll see fields that are present in the output data.
+1. When the connection is established, you see fields that are present in the output data.
1. Select **Save** on the command bar to save your configuration.+
+ :::image type="content" source="./media/capture-event-hub-data-parquet/save-configuration.png" alt-text="Screenshot showing the Save button selected on the command bar." :::
1. Select **Start** on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window: 1. Choose the output start time.
+ 1. Select the pricing plan.
1. Select the number of Streaming Units (SU) that the job runs with. SU represents the computing resources that are allocated to execute a Stream Analytics job. For more information, see [Streaming Units in Azure Stream Analytics](stream-analytics-streaming-unit-consumption.md).
- 1. In the **Choose Output data error handling** list, select the behavior you want when the output of the job fails due to data error. Select **Retry** to have the job retry until it writes successfully or select another option.
+
:::image type="content" source="./media/capture-event-hub-data-parquet/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you set the output start time, streaming units, and error handling." lightbox="./media/capture-event-hub-data-parquet/start-job.png" :::
+1. You should see the Stream Analytic job in the **Stream Analytics job** tab of the **Process data** page for your event hub.
-## Verify output
-Verify that the Parquet files are generated in the Azure Data Lake Storage container.
-
+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" alt-text="Screenshot showing the Stream Analytics job on the Process data page." lightbox="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" :::
+
+## Verify output
-The new job is shown on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it.
+1. On the Event Hubs instance page for your event hub, select **Generate data**, select **Stocks data** for dataset, and then select **Send** to send some sample data to the event hub.
+1. Verify that the Parquet files are generated in the Azure Data Lake Storage container.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/verify-captured-data.png" alt-text="Screenshot showing the generated Parquet files in the ADLS container." lightbox="./media/capture-event-hub-data-parquet/verify-captured-data.png" :::
+1. Select **Process data** on the left menu. Switch to the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it.
-Here's an example screenshot of metrics showing input and output events.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/open-metrics-link.png" alt-text="Screenshot showing Open Metrics link selected." lightbox="./media/capture-event-hub-data-parquet/open-metrics-link.png" :::
+
+ Here's an example screenshot of metrics showing input and output events.
+ :::image type="content" source="./media/capture-event-hub-data-parquet/job-metrics.png" alt-text="Screenshot showing metrics of the Stream Analytics job." lightbox="./media/capture-event-hub-data-parquet/job-metrics.png" :::
## Next steps
stream-analytics No Code Transform Filter Ingest Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-transform-filter-ingest-sql.md
Previously updated : 06/07/2022 Last updated : 06/13/2023 # Use Azure Stream Analytics no-code editor to transform and store data in Azure SQL database
stream-analytics Powerbi Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/powerbi-output-managed-identity.md
Previously updated : 05/30/2021 Last updated : 08/16/2023 # Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI
-[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it is no longer required for a user to interactively log in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you will not need to periodically reauthorize the job.
+[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it's no longer required for a user to interactively sign in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you won't need to periodically reauthorize the job.
This article shows you how to enable Managed Identity for the Power BI output(s) of a Stream Analytics job through the Azure portal and through an Azure Resource Manager deployment.
+> [!NOTE]
+> Only **system-assigned** managed identities are supported with the Power BI output. Currently, using user-assigned managed identities with the Power BI output isn't supported.
+ ## Prerequisites
-The following are required for using this feature:
+You must have the following prerequisites before you use this feature:
- A Power BI account with a [Pro license](/power-bi/service-admin-purchasing-power-bi-pro).--- An upgraded workspace within your Power BI account. See [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/) of this feature for more details.
+- An upgraded workspace within your Power BI account. For more information, see [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/).
## Create a Stream Analytics job using the Azure portal
-1. Create a new Stream Analytics job or open an existing job in the Azure portal. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Configure**. Ensure that "Use System-assigned Managed Identity" is selected and then select the **Save** button on the bottom of the screen.
+1. Create a new Stream Analytics job or open an existing job in the Azure portal.
+1. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Settings**.
- ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity.png)
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png" alt-text="Screenshot showing the Managed Identity page with Select identity button selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png":::
+1. On the **Select identity** page, select **System assigned identity**, and then select **Save**.
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png" alt-text="Screenshot showing the Select identity page with System assigned identity selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png":::
+1. On the **Managed identity** page, confirm that you see the **Principal ID** and **Principal name** assigned to your Stream Analytics job. The principal name should be same as your Stream Analytics job name.
2. Before configuring the output, give the Stream Analytics job access to your Power BI workspace by following the directions in the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article.
+3. Navigate to the **Outputs** section of your Stream Analytic's job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and sign in with your Power BI account.
-3. Navigate to the **Outputs** section of your Stream Analytic's job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and log in with your Power BI account.
-
- ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png)
+ [ ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png#lightbox)
4. Once authorized, a dropdown list is populated with all of the workspaces you have access to. Select the workspace that you authorized in the previous step. Then select **Managed Identity** as the "Authentication mode". Finally, select the **Save** button.
- ![Configure Power BI output with Managed Identity](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png)
+ :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png" alt-text="Screenshot showing the Power BI output configuration with Managed identity authentication mode selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png":::
## Azure Resource Manager deployment
Azure Resource Manager allows you to fully automate the deployment of your Strea
} ```
- If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned "principalId".
+ If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned `principalId`.
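    If you need to retrieve that `principalId` again later, you can read it off the job resource. A minimal sketch with the Az PowerShell module; the resource names are placeholders:

    ```powershell
    # Read the system-assigned identity's principal ID from the Stream Analytics job resource.
    $Job = Get-AzResource -ResourceGroupName "<resource-group>" -ResourceType "Microsoft.StreamAnalytics/streamingjobs" -Name "<job-name>"
    $Job.Identity.PrincipalId
    ```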
3. Now that the job is created, continue to the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article.
Now that the Stream Analytics job has been created, it can be given access to a
### Use the Power BI UI > [!Note]
- > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. See [Get started with a service principal](/power-bi/developer/embed-service-principal) for more details.
+ > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. For more information, see [Get started with a service principal](/power-bi/developer/embed-service-principal).
-1. Navigate to the workspace's access settings. See this article for more details: [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace).
+1. Navigate to the workspace's access settings. For more information, see [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace).
2. Type the name of your Stream Analytics job in the text box and select **Contributor** as the access level. 3. Select **Add** and close the pane.
- ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png)
+ [ ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png#lightbox)
### Use the Power BI PowerShell cmdlets
Now that the Stream Analytics job has been created, it can be given access to a
> [!Important] > Ensure that you're using version 1.0.821 or later of the cmdlets.
-```powershell
-Install-Module -Name MicrosoftPowerBIMgmt
-```
-
-2. Log in to Power BI.
-
-```powershell
-Login-PowerBI
-```
+ ```powershell
+ Install-Module -Name MicrosoftPowerBIMgmt
+ ```
+2. Sign in to Power BI.
+ ```powershell
+ Login-PowerBI
+ ```
3. Add your Stream Analytics job as a Contributor to the workspace.
-```powershell
-Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor
-```
+ ```powershell
+ Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor
+ ```
### Use the Power BI REST API
Request Body
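The elided body here corresponds to the Power BI **Add Group User** call, a `POST` to `https://api.powerbi.com/v1.0/myorg/groups/{groupId}/users`. A minimal sketch of the request body, assuming the standard contract (verify the property names against the Power BI REST API reference):
```json
{
  "identifier": "<principal-id>",
  "principalType": "App",
  "groupUserAccessRight": "Contributor"
}
```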
### Use a Service Principal to grant permission for an ASA job's Managed Identity
-For automated deployments, using an interactive login to give an ASA job access to a Power BI workspace is not possible. This can be done be using service principal to grant permission for an ASA job's managed identity. This is possible using PowerShell:
+For automated deployments, using an interactive sign-in to give an ASA job access to a Power BI workspace isn't possible. Instead, you can use a service principal to grant permission for an ASA job's managed identity by using PowerShell:
```powershell Connect-PowerBIServiceAccount -ServicePrincipal -TenantId "<tenant-id>" -CertificateThumbprint "<thumbprint>" -ApplicationId "<app-id>"
Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -Pr
## Remove Managed Identity
-The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There is no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity will continue to exist until the job is deleted, and will be used if you decide to used Managed Identity authentication again.
+The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There's no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity continues to exist until the job is deleted, and is used if you decide to use Managed Identity authentication again.
## Limitations Below are the limitations of this feature: -- Classic Power BI workspaces are not supported.
+- Classic Power BI workspaces aren't supported.
- Azure accounts without Azure Active Directory aren't supported. -- Multi-tenant access is not supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and cannot be used with a resource that resides in a different Azure Active Directory tenant.
+- Multi-tenant access isn't supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and can't be used with a resource that resides in a different Azure Active Directory tenant.
-- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) is not supported. This means you are not able to enter your own service principal to be used by their Stream Analytics job. The service principal must be generated by Azure Stream Analytics.
+- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) isn't supported. This means you aren't able to enter your own service principal to be used by your Stream Analytics job. The service principal must be generated by Azure Stream Analytics.
## Next steps
stream-analytics Sql Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-reference-data.md
Use the following steps to add Azure SQL Database as a reference input source us
1. Create a Stream Analytics job.
-2. Create a storage account to be used by the Stream Analytics job.
+2. Create a storage account to be used by the Stream Analytics job.
+ > [!IMPORTANT]
+ > Azure Stream Analytics retains snapshots within this storage account. When you configure the retention policy, make sure that the chosen timespan covers the desired recovery duration for your Stream Analytics job.
-3. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job.
+4. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job.
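   As an illustration of such a data set, here's a hypothetical reference table; the table and column names are invented for this sketch and aren't part of the walkthrough:
   ```sql
   -- Hypothetical reference data: a small device catalog keyed by sensor name
   CREATE TABLE dbo.DeviceReference (
       DeviceId NVARCHAR(50) NOT NULL PRIMARY KEY,
       DeviceName NVARCHAR(100) NOT NULL,
       TemperatureThreshold FLOAT NOT NULL
   );

   INSERT INTO dbo.DeviceReference (DeviceId, DeviceName, TemperatureThreshold)
   VALUES ('sensorA', 'Floor 1 sensor', 100.0),
          ('sensorB', 'Floor 2 sensor', 95.0);
   ```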
### Define SQL Database reference data input
Use the following steps to add Azure SQL Database as a reference input source us
2. Become familiar with the [Stream Analytics tools for Visual Studio](stream-analytics-quick-create-vs.md) quickstart. 3. Create a storage account.
+ > [!IMPORTANT]
+ > Azure Stream Analytics retains snapshots within this storage account. When you configure the retention policy, make sure that the chosen timespan covers the desired recovery duration for your Stream Analytics job.
### Create a SQL Database table
stream-analytics Stream Analytics Get Started With Azure Stream Analytics To Process Data From Iot Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-get-started-with-azure-stream-analytics-to-process-data-from-iot-devices.md
Previously updated : 03/23/2022 Last updated : 08/15/2023 # Process real-time IoT data streams with Azure Stream Analytics
In this article, you learn how to create stream-processing logic to gather data
## Scenario
-Contoso, which is a company in the industrial automation space, has completely automated its manufacturing process. The machinery in this plant has sensors that are capable of emitting streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data.
+Contoso, a company in the industrial automation space, has automated its manufacturing process. The machinery in this plant has sensors that can emit streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take action on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns in the incoming stream of data.
-In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format and looks like the following:
+In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format as shown in the following sample snippet:
```json {
In this example, the data is generated from a Texas Instruments sensor tag devic
} ```
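The payload sample is truncated in this excerpt. For orientation, here's a representative event reconstructed from the fields the queries below rely on (`time`, `dspl`, `temp`, `hmdt`); the values are illustrative:
```json
{
    "time": "2016-01-26T20:47:53.0000000",
    "dspl": "sensorA",
    "temp": 123.18,
    "hmdt": 34.97
}
```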
-In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or Iot Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
+In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or IoT Hub and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md).
-For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you will learn how to connect your job to inputs and outputs and deploy them to the Azure service.
+For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you learn how to connect your job to inputs and outputs and deploy them to the Azure service.
## Create a Stream Analytics job
-1. In the [Azure portal](https://portal.azure.com), select **+ Create a resource** from the left navigation menu. Then, select **Stream Analytics job** from **Analytics**.
+1. Navigate to the [Azure portal](https://portal.azure.com).
+1. On the left navigation menu, select **All services**, select **Analytics**, hover the mouse over **Stream Analytics jobs**, and then select **Create**.
- ![Create a new Stream Analytics job](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png)
-
-1. Enter a unique job name and verify the subscription is the correct one for your job. Create a new resource group or select an existing one from your subscription.
-
-1. Select a location for your job. Use the same location for your resource group and all resources to increased processing speed and reduced of costs. After you've made the configurations, select **Create**.
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png" alt-text="Screenshot that shows the selection of Create button for a Stream Analytics job." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png":::
+1. On the **New Stream Analytics job** page, follow these steps:
+ 1. For **Subscription**, select your **Azure subscription**.
+ 1. For **Resource group**, select an existing resource group or create a resource group.
+ 1. For **Name**, enter a unique name for the Stream Analytics job.
+ 1. Select the **Region** in which you want to deploy the Stream Analytics job. Use the same location for your resource group and all resources to increase the processing speed and reduce costs.
+ 1. Select **Review + create**.
- ![Create a new Stream Analytics job details](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png" alt-text="Screenshot that shows the New Stream Analytics job page.":::
+1. On the **Review + create** page, review settings, and select **Create**.
+1. After the deployment succeeds, select **Go to resource** to navigate to the **Stream Analytics job** page for your Stream Analytics job.
## Create an Azure Stream Analytics query
-The next step after your job is created is to write a query. You can test queries against sample data without connecting an input or output to your job.
-
-Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json
-) from GitHub. Then, navigate to your Azure Stream Analytics job in the Azure portal.
-
-Select **Query** under **Job topology** from the left menu. Then select **Upload sample input**. Upload the `HelloWorldASA-InputStream.json` file, and select **Ok**.
+After your job is created, write a query. You can test queries against sample data without connecting an input or output to your job.
-![Stream Analytics dashboard query tile](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png)
+1. Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json) from GitHub.
+1. On the **Azure Stream Analytics job** page in the Azure portal, select **Query** under **Job topology** from the left menu.
+1. Select **Upload sample input**, select the `HelloWorldASA-InputStream.json` file you downloaded, and select **OK**.
-Notice that a preview of the data is automatically populated in the **Input preview** table.
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png" alt-text="Screenshot that shows the Query page with Upload sample input selected." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png":::
+1. Notice that a preview of the data is automatically populated in the **Input preview** table.
-![Preview of sample input data](./media/stream-analytics-get-started-with-iot-devices/input-preview.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/input-preview.png" alt-text="Screenshot that shows sample input data in the Input preview tab.":::
### Query: Archive your raw data The simplest form of query is a pass-through query that archives all input data to its designated output. This query is the default query populated in a new Azure Stream Analytics job.
-```sql
-SELECT
- *
-INTO
- Output
-FROM
- InputStream
-```
+1. In the **Query** window, enter the following query, and then select **Test query** on the toolbar.
-Select **Test query** and view the results in the **Test results** table.
+ ```sql
+ SELECT
+ *
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias
+ ```
+2. View the results in the **Test results** tab in the bottom pane.
-![Test results for Stream Analytics query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png" alt-text="Screenshot that shows the sample query and its results.":::
### Query: Filter the data based on a condition
-Let's try to filter the results based on a condition. We would like to show results for only those events that come from "sensorA."
-
-```sql
-SELECT
- time,
- dspl AS SensorName,
- temp AS Temperature,
- hmdt AS Humidity
-INTO
- Output
-FROM
- InputStream
-WHERE dspl='sensorA'
-```
+Let's update the query to filter the results based on a condition. For example, the following query shows events that come from `sensorA`.
+
+1. Update the query with the following sample:
-Paste the query in the editor and select **Test query** to review the results.
+ ```sql
+ SELECT
+ time,
+ dspl AS SensorName,
+ temp AS Temperature,
+ hmdt AS Humidity
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias
+ WHERE dspl='sensorA'
+ ```
+2. Select **Test query** to see the results of the query.
-![Filtering a data stream](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png" alt-text="Screenshot that shows the query results with the filter.":::
### Query: Alert to trigger a business workflow Let's make our query more detailed. For every type of sensor, we want to monitor average temperature per 30-second window and display results only if the average temperature is above 100 degrees.
-```sql
-SELECT
- System.Timestamp AS OutputTime,
- dspl AS SensorName,
- Avg(temp) AS AvgTemperature
-INTO
- Output
-FROM
- InputStream TIMESTAMP BY time
-GROUP BY TumblingWindow(second,30),dspl
-HAVING Avg(temp)>100
-```
+1. Update the query to:
+
+ ```sql
+ SELECT
+ System.Timestamp AS OutputTime,
+ dspl AS SensorName,
+ Avg(temp) AS AvgTemperature
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias TIMESTAMP BY time
+ GROUP BY TumblingWindow(second,30),dspl
+ HAVING Avg(temp)>100
+ ```
+1. Select **Test query** to see the results of the query.
-![30-second filter query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png)
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png" alt-text="Screenshot that shows the query with a tumbling window.":::
-You should see results that contain only 245 rows and names of sensors where the average temperate is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you have specified the **OUTPUTTIME** column to associate times with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics).
+ You should see results that contain only 245 rows and names of sensors where the average temperature is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you specify the **time** column to associate times with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics).
### Query: Detect absence of events
-How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then did not send events for the next 5 seconds.
-
-```sql
-SELECT
- t1.time,
- t1.dspl AS SensorName
-INTO
- Output
-FROM
- InputStream t1 TIMESTAMP BY time
-LEFT OUTER JOIN InputStream t2 TIMESTAMP BY time
-ON
- t1.dspl=t2.dspl AND
- DATEDIFF(second,t1,t2) BETWEEN 1 and 5
-WHERE t2.dspl IS NULL
-```
+How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then didn't send events for the next 5 seconds.
+
+1. Update the query to:
+
+ ```sql
+ SELECT
+ t1.time,
+ t1.dspl AS SensorName
+ INTO
+ youroutputalias
+ FROM
+ yourinputalias t1 TIMESTAMP BY time
+ LEFT OUTER JOIN yourinputalias t2 TIMESTAMP BY time
+ ON
+ t1.dspl=t2.dspl AND
+ DATEDIFF(second,t1,t2) BETWEEN 1 and 5
+ WHERE t2.dspl IS NULL
+ ```
+2. Select **Test query** to see the results of the query.
+
+ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png" alt-text="Screenshot that shows the query that detects absence of events.":::
-![Detect absence of events](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png)
-Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is very useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics).
+ Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics).
## Conclusion
-The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this is just to get you started. Stream Analytics supports a variety of inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
+The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this article is just to get you started. Stream Analytics supports various inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md).
stream-analytics Stream Analytics User Assigned Managed Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md
Previously updated : 09/29/2022 Last updated : 08/15/2023 # User-assigned managed identities for Azure Stream Analytics
With support for both system-assigned identity and user-assigned identity, here
2. You can switch from an existing user-assigned identity to a newly created user-assigned identity. The previous identity isn't removed from the storage access control list. 3. You cannot add multiple identities to your stream analytics job. 4. Currently we do not support deleting an identity from a stream analytics job. You can replace it with another user-assigned or system-assigned identity.
+5. You can't use a user-assigned identity to authenticate via the allow trusted services option.
## Next steps
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
The following table describes the built-in roles and the scopes at which they ca
|Synapse Administrator |Full Synapse access to SQL pools, Data Explorer pools, Apache Spark pools, and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential. Includes assigning Synapse RBAC roles. In addition to Synapse Administrator, Azure Owners can also assign Synapse RBAC roles. Azure permissions are required to create, delete, and manage compute resources. </br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential | |Synapse Apache Spark Administrator</br>|Full Synapse access to Apache Spark Pools. Create, read, update, and delete access to published Spark job definitions, notebooks and their outputs, and to libraries, linked services, and credentials.  Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can do all actions on Spark artifacts</br>Can do all actions on Spark activities_|Workspace</br>Spark pool| |Synapse SQL Administrator|Full Synapse access to serverless SQL pools. Create, read, update, and delete access to published SQL scripts, credentials, and linked services.  Includes read access to all other published code artifacts.  Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>*Can do all actions on SQL scripts<br/>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions*|Workspace|
-|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime|
-|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace
+|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including scheduled pipelines, credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime|
+|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs, including scheduled pipelines. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace
|Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace |Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime| |Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace |
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
You can pause or scale a dedicated SQL pool, configure a Spark pool, or an integ
With access to Synapse Studio, you can create new code artifacts, such as SQL scripts, KQL scripts, notebooks, spark jobs, linked services, pipelines, dataflows, triggers, and credentials. These artifacts can be published or saved with additional permissions.
-If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator you can list, open, and edit already published code artifacts.
+If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator you can list, open, and edit already published code artifacts, including scheduled pipelines.
### Execute your code
synapse-analytics Apache Spark Development Using Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md
We provide rich operations to develop notebooks:
+ [Collapse a cell output](#collapse-a-cell-output) + [Notebook outline](#notebook-outline)
+> [!NOTE]
+>
+> In the notebooks, a SparkSession is automatically created for you and stored in a variable called `spark`. There's also a variable for the SparkContext called `sc`. You can access these variables directly, but don't change their values.
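A quick sketch of using these prebuilt variables in a PySpark cell (output depends on your attached pool):
```python
# 'spark' (SparkSession) and 'sc' (SparkContext) already exist in a Synapse notebook
print(spark.version)       # Spark version of the attached pool
print(sc.applicationId)    # ID of the running Spark application

# Use the existing session instead of creating a new one
df = spark.range(5)
df.show()
```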
++ <h3 id="add-a-cell">Add a cell</h3> There are multiple ways to add a new cell to your notebook.
Select the **Undo** / **Redo** button or press **Z** / **Shift+Z** to revoke the
![Screenshot of Synapse undo cells of aznb](./media/apache-spark-development-using-notebooks/synapse-undo-cells-aznb.png) Supported undo cell operations:
-+ Insert/Delete cell: You could revoke the delete operations by selecting **Undo**, the text content will be kept along with the cell.
++ Insert/Delete cell: You can revoke delete operations by selecting **Undo**; the text content is kept along with the cell. + Reorder cell. + Toggle parameter. + Convert between Code cell and Markdown cell.
Select the **Cancel All** button to cancel the running cells or cells waiting in
### Notebook reference
-You can use ```%run <notebook path>``` magic command to reference another notebook within current notebook's context. All the variables defined in the reference notebook are available in the current notebook. ```%run``` magic command supports nested calls but not support recursive calls. You will receive an exception if the statement depth is larger than **five**.
+You can use the ```%run <notebook path>``` magic command to reference another notebook within the current notebook's context. All the variables defined in the referenced notebook are available in the current notebook. The ```%run``` magic command supports nested calls but doesn't support recursive calls. You receive an exception if the statement depth is larger than **five**.
Example: ``` %run /<path>/Notebook1 { "parameterInt": 1, "parameterFloat": 2.5, "parameterBool": true, "parameterString": "abc" } ```.
Notebook reference works in both interactive mode and Synapse pipeline.
### Variable explorer
-Synapse notebook provides a built-in variables explorer for you to see the list of the variables name, type, length, and value in the current Spark session for PySpark (Python) cells. More variables will show up automatically as they are defined in the code cells. Clicking on each column header will sort the variables in the table.
+Synapse notebooks provide a built-in variable explorer for you to see the name, type, length, and value of the variables in the current Spark session for PySpark (Python) cells. More variables show up automatically as they're defined in the code cells. Selecting a column header sorts the variables in the table.
You can select the **Variables** button on the notebook command bar to open or hide the variable explorer.
Parameterized session configuration allows you to replace the value in %%configu
} ```
-Notebook will use default value if run a notebook in interactive mode directly or no parameter that match "activityParameterName" is given from Pipeline Notebook activity.
+The notebook uses the default value if you run the notebook in interactive mode directly, or if the pipeline Notebook activity doesn't pass in a parameter that matches "activityParameterName".
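As a sketch of the shape being described, here's a `%%configure` cell where one value is an object carrying `activityParameterName` and `defaultValue`; the parameter name is illustrative:
```python
%%configure
{
    "driverMemory": "28g",
    "driverCores": {
        "activityParameterName": "driverCoresFromNotebookActivity",
        "defaultValue": 4
    }
}
```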
During the pipeline run mode, you can configure pipeline Notebook activity settings as below: ![Screenshot of parameterized session configuration](./media/apache-spark-development-using-notebooks/parameterized-session-config.png)
You can access data in the primary storage account directly. There's no need to
## IPython Widgets
-Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider, textbox etc. IPython Widgets only works in Python environment, it's not supported in other languages (e.g. Scala, SQL, C#) yet.
+Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider or text box. IPython widgets only work in the Python environment; they're not supported in other languages (for example, Scala, SQL, C#) yet.
### To use IPython Widget 1. You need to import `ipywidgets` module first to use the Jupyter Widget framework.
Widgets are eventful Python objects that have a representation in the browser, o
slider ```
-3. Run the cell, the widget will display at the output area.
+3. Run the cell, and the widget displays in the output area.
![Screenshot of ipython widgets slider](./media/apache-spark-development-using-notebooks/ipython-widgets-slider.png)
-4. You can use multiple `display()` calls to render the same widget instance multiple times, but they will remain in sync with each other.
+4. You can use multiple `display()` calls to render the same widget instance multiple times, but they remain in sync with each other.
```python slider = widgets.IntSlider()
Widgets are eventful Python objects that have a representation in the browser, o
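Putting the steps above together, here's a minimal end-to-end sketch of standard `ipywidgets` usage:
```python
import ipywidgets as widgets
from IPython.display import display

# Create the slider once, then render it twice; both views stay in sync
slider = widgets.IntSlider()
display(slider)
display(slider)

# Read the current value after the user moves the slider
print(slider.value)
```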
|`widgets.jslink()`|You can use `widgets.link()` function to link two similar widgets.| |`FileUpload` widget| Not support yet.|
-2. Global `display` function provided by Synapse does not support displaying multiple widgets in 1 call (i.e. `display(a, b)`), which is different from IPython `display` function.
+2. The global `display` function provided by Synapse doesn't support displaying multiple widgets in one call (that is, `display(a, b)`), which is different from the IPython `display` function.
3. If you close a notebook that contains an IPython widget, you can't see or interact with it until you execute the corresponding cell again.
Available cell magics:
<h2 id="reference-unpublished-notebook">Reference unpublished notebook</h2>
-Reference unpublished notebook is helpful when you want to debug "locally", when enabling this feature, notebook run will fetch the current content in web cache, if you run a cell including a reference notebooks statement, you will reference the presenting notebooks in the current notebook browser instead of a saved versions in cluster, that means the changes in your notebook editor can be referenced immediately by other notebooks without having to be published(Live mode) or committed(Git mode), by leveraging this approach you can easily avoid common libraries getting polluted during developing or debugging process.
+Reference unpublished notebook is helpful when you want to debug "locally". When you enable this feature, a notebook run fetches the current content from the web cache. If you run a cell that includes a reference notebook statement, you reference the notebooks presented in the current notebook browser instead of the saved versions in the cluster. That means changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). With this approach, you can easily avoid polluting common libraries during the development or debugging process.
You can enable Reference unpublished notebook from Properties panel:
You can reuse your notebook sessions conveniently now without having to start ne
![Screenshot of notebook-manage-sessions](./media/apache-spark-development-using-notebooks/synapse-notebook-manage-sessions.png)
-In the **Active sessions** list you can see the session information and the corresponding notebook that is currently attached to the session. You can operate Detach with notebook, Stop the session, and View in monitoring from here. Moreover, you can easily connect your selected notebook to an active session in the list started from another notebook, the session will be detached from the previous notebook (if it's not idle) then attach to the current one.
+In the **Active sessions** list, you can see the session information and the corresponding notebook that is currently attached to the session. From here, you can detach the notebook, stop the session, and view it in monitoring. Moreover, you can easily connect your selected notebook to an active session in the list started from another notebook; the session is detached from the previous notebook (if it's not idle) and then attached to the current one.
![Screenshot of notebook-sessions-list](./media/apache-spark-development-using-notebooks/synapse-notebook-sessions-list.png)
To parameterize your notebook, select the ellipses (...) to access the **more co
-Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine will add a new cell beneath the parameters cell with input parameters in order to overwrite the default values.
+Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine adds a new cell beneath the parameters cell with input parameters in order to overwrite the default values.
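For instance, a cell toggled as the parameters cell might hold defaults like these; the parameter names and path are hypothetical:
```python
# Parameters cell: these defaults are overridden by values the pipeline passes in
input_path = "abfss://data@contosolake.dfs.core.windows.net/raw/"  # hypothetical path
run_date = "2023-08-15"
batch_size = 1000
```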
### Assign parameters values from a pipeline
update-center Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md
See the following sections for detailed information:
Update management center (preview) is available in all [Azure public regions](support-matrix.md#supported-regions).
+## Configure reboot settings
+
+The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
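For example, here's a hedged PowerShell sketch for inspecting and setting one of the documented values; it assumes the `AU` policy key already exists on the machine:
```powershell
# Inspect the Windows Update policy values that can trigger restarts
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'

# Example: block automatic restarts while users are signed in
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU' `
    -Name 'NoAutoRebootWithLoggedOnUsers' -Value 1 -Type DWord
```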
+ ## Install updates on single VM
To install one time updates on a single VM, follow these steps:
- In **Select resources**, choose the machine and select **Add**.
-1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, its necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine.
+1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, it's necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine.
> [!NOTE] > - Selected Updates shows a preview of OS updates which may be installed based on the last OS update assessment information available. If the OS update assessment information in update management center (preview) is obsolete, the actual updates installed would vary. Especially if you have chosen to install a specific update category, where the OS updates applicable may vary as new packages or KB IDs may be available for the category.
To install one time updates on a single VM, follow these steps:
:::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot on including update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png":::
- - Select **Include KB ID/package** to include in the updates. Enter a comma-separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, update management center (preview) shows a preview of OS updates under the **Selected Updates** section.
+ - Select **Include KB ID/package** to include in the updates. Enter a comma-separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma-separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, update management center (preview) shows a preview of OS updates under the **Selected Updates** section.
- To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend checking this option because updates that are not displayed here might be installed, as newer updates might be available.
- - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date** and in the Include by maximum patch publish date , choose the date and select **Add** and **Next**.
+ - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date**, choose the date, and then select **Add** and **Next**.
:::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot on including patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png":::
A notification appears to inform you the activity has started and another is cre
You can browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions. For more information, see [Update deployment history](manage-multiple-machines.md#update-deployment-history).
-After your scheduled deployment starts, you can see it's status on the **History** tab. It displays the total number of deployments including the successful and failed deployments.
+After your scheduled deployment starts, you can see its status on the **History** tab. It displays the total number of deployments including the successful and failed deployments.
:::image type="content" source="./media/deploy-updates/updates-history-inline.png" alt-text="Screenshot showing updates history." lightbox="./media/deploy-updates/updates-history-expanded.png":::
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Update management center (preview) uses maintenance control schedule instead of
1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. 1. VMs in a common availability set are updated within Update Domain boundaries and, VMs across multiple Update Domains aren't updated concurrently.
+## Configure reboot settings
+
+The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment.
+ ## Service limits The following are the recommended limits for the mentioned indicators:
update-center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md
Use one of the following options to perform the settings change at scale:
**Linux**: If you include a specific third party software repository in the Linux package manager repository location, it is scanned when it performs software update operations. The package won't be available for assessment and installation if you remove it.
+> [!NOTE]
+> Update management center does not support managing the Microsoft Configuration Manager client.
+ ## Supported regions
update-center Updates Maintenance Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md
Update management center (preview) provides you the flexibility to take an immed
Update management center (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm). + ## Scheduled patching You can create a schedule on a daily, weekly or hourly cadence as per your requirement, specify the machines that must be updated as part of the schedule, and the updates that you must install. The schedule will then automatically install the updates as per the specifications.
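As a sketch, a weekly in-guest patch schedule can be created with the `az maintenance` CLI extension along these lines; the resource names, window values, and recurrence string are illustrative assumptions to verify against the CLI reference:
```azurecli
az maintenance configuration create \
    --resource-group myResourceGroup \
    --resource-name myPatchSchedule \
    --location eastus \
    --maintenance-scope InGuestPatch \
    --maintenance-window-start-date-time "2023-09-02 01:00" \
    --maintenance-window-recur-every "1Week Saturday" \
    --maintenance-window-duration "03:55" \
    --maintenance-window-time-zone "UTC" \
    --extension-properties InGuestPatchMode="User"
```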
update-center Whats Upcoming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md
Expanded support for [specialized images](../virtual-machines/linux/imaging.md#s
Update management center will be declared GA soon.
+## Prescript and postscript
+
+The prescript and postscript feature will be available soon.
+ ## Next steps - [Learn more](support-matrix.md) about supported regions.
virtual-desktop Deploy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md
Title: Deploy the diagnostics tool for Azure Virtual Desktop (classic) - Azure
description: How to deploy the diagnostics UX tool for Azure Virtual Desktop (classic). + Last updated 12/15/2020
You can also interact with users on the session host:
## Next steps - Learn how to monitor activity logs at [Use diagnostics with Log Analytics](diagnostics-log-analytics-2019.md).-- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md).
+- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md).
virtual-desktop Manage Resources Using Ui Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui-powershell.md
Last updated 03/30/2020 -+
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
The memory-optimized Ebsv5 and Ebdsv5 Azure virtual machine (VM) series deliver higher remote storage performance in each VM size than the [Ev4 series](ev4-esv4-series.md). The increased remote storage performance of the Ebsv5 and Ebdsv5 VMs is ideal for storage throughput-intensive workloads. For example, relational databases and data analytics applications.
-The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. Standard SSDs and Standard HDD disk storage aren't supported in the Ebv5 series.
+The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance for data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk; the Ebsv5 series doesn't. We recommend choosing Premium SSD, Premium SSD v2, or Ultra disks to attain the published disk performance.
The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. They feature:
Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo
- SCSI Interface: Supported on Generation 1 and 2 VMs ## Ebdsv5 Series (SCSI)
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
||||||||||||| | Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 | | Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 | | Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |
-| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 4 | 12500 |
+| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 |
| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/2000 | 88000/2500 | 120000/4000 | 117920/2500|160000/4000| 8 | 16000 | | Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 16000 | | Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 20000 | | Standard_E96bds_v5 | 96 | 672 | 3600 | 32 | 450000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 25000 | ## Ebdsv5 Series (NVMe)
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
||||||||||||| | Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 | | Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
- NVMe Interface: Supported only on Generation 2 VMs - SCSI Interface: Supported on Generation 1 and Generation 2 VMs ## Ebsv5 Series (SCSI)
-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
| | | | | | | | | | | | Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 | | Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
| Standard_E96bs_v5 | 96 | 672 | 32 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 25000 | ## Ebsv5 Series (NVMe)
-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
+| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
| | | | | | | | | | | | Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 | | Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 |
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
Last updated 01/04/2023--+ # Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release) for Linux VMs
If you would like to use certificate authentication and wrap the encryption key
## Next steps
-[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md)
+[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md)
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 01/03/2023 Last updated : 08/16/2023
sourceDiskSizeBytes=$(az disk show -g $sourceRG -n $sourceDiskName --query '[dis
az disk create -g $targetRG -n $targetDiskName -l $targetLocation --os-type $targetOS --for-upload --upload-size-bytes $(($sourceDiskSizeBytes+512)) --sku standard_lrs
-targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 -o tsv)
+targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 --query [accessSas] -o tsv)
sourceSASURI=$(az disk grant-access -n $sourceDiskName -g $sourceRG --duration-in-seconds 86400 --query [accessSas] -o tsv)
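With both SAS URIs granted, the copy itself is typically performed with AzCopy and access revoked afterwards; here's a sketch consistent with the variables above:
```azurecli
azcopy copy "$sourceSASURI" "$targetSASURI" --blob-type PageBlob

az disk revoke-access -n $sourceDiskName -g $sourceRG
az disk revoke-access -n $targetDiskName -g $targetRG
```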
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md
Last updated 01/04/2023---+ # Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)
If you would like to use certificate authentication and wrap the encryption key
## Next steps
-[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md)
+[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md)
virtual-machines Oracle Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md
You can also implement high availability and disaster recovery for Oracle Databa
We recommend placing the VMs in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. If you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway. To walk through the basic setup procedure on Azure, see Implement Oracle Data Guard on an Azure Linux virtual machine.
-With Oracle Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see Active Data Guard and GoldenGate. If you need read-write access to the copy of the database, you can use Oracle Active Data Guard.
+With Oracle Active Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see [Active Data Guard and GoldenGate](https://www.oracle.com/docs/tech/database/oow14-con7715-adg-gg-bestpractices.pdf). If you need read-write access to the copy of the database, you can use Oracle GoldenGate.
+ To walk through the basic setup procedure on Azure, see [Implement Oracle Golden Gate on an Azure Linux VM](configure-oracle-golden-gate.md). In addition to having a high availability and disaster recovery solution architected in Azure, you should have a backup strategy in place to restore your database.
Different [backup strategies](oracle-database-backup-strategies.md) are availabl
- Using [Azure backup](oracle-database-backup-azure-backup.md) - Using [Oracle RMAN Streaming data](oracle-rman-streaming-backup.md) backup ## Deploy Oracle applications on Azure
-Use Terraform templates to set up Azure infrastructure and install Oracle applications. For more information, see [Terraform on Azure](/azure/developer/terraform).
+Use Terraform templates, the Azure CLI, or the Azure portal to set up Azure infrastructure and install Oracle applications. You can also use Ansible to configure the database inside the VM. For more information, see [Terraform on Azure](/azure/developer/terraform).
Oracle has certified the following applications to run in Azure when connecting to an Oracle database by using the Azure with Oracle Cloud interconnect solution: - E-Business Suite
You can deploy custom applications in Azure that connect with OCI and other Azur
According to Oracle Support, JD Edwards EnterpriseOne versions 9.2 and above are supported on any public cloud offering that meets their specific Minimum Technical Requirements (MTR). You need to create custom images that meet their MTR specifications for operating system and software application compatibility. For more information, see [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html). ## Licensing Deployment of Oracle solutions in Azure is based on a bring-your-own-license model. This model assumes that you have licenses to use Oracle software and that you have a current support agreement in place with Oracle.
-Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
+Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. For more information, see [Oracle Processor Core Factor Table](https://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf). Instead, for Enterprise Edition databases on VMs with Hyper-Threading Technology enabled, count two vCPUs as equivalent to one Oracle Processor license, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
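As a worked example of that two-vCPUs-per-license counting rule (the VM size and math below are illustrative, not taken from the policy document):

```azurecli
# Hypothetical example: Oracle Database Enterprise Edition on a hyperthreaded VM.
# A Standard_E16ds_v5 VM exposes 16 vCPUs with Hyper-Threading enabled.
# 16 vCPUs / 2 vCPUs per license = 8 Oracle Processor licenses required.
```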
Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](/azure/virtual-machines/sizes-memory) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](/azure/virtual-machines/constrained-vcpu) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count. When you migrate Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/azure/interconnect/faq/). ## Next steps
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
Here are some scenarios where security admin rules can be used:
| **Enforcing application-level security** | Security admin rules can be used to enforce application-level security by blocking traffic to or from specific applications or services. | With Azure Virtual Network Manager, you have a centralized location to manage security admin rules. Centralization allows you to define security policies at scale and apply them to multiple virtual networks at once.+
+> [!NOTE]
+> Currently, security admin rules do not apply to private endpoints that fall under the scope of a managed virtual network.
+ ## How do security admin rules work? Security admin rules allow or deny traffic on specific ports, protocols, and source/destination IP prefixes in a specified direction. When you define a security admin rule, you specify the following conditions:
Security admin rules allow or deny traffic on specific ports, protocols, and sou
- The protocol to be used To enforce security policies across multiple virtual networks, you [create and deploy a security admin configuration](how-to-block-network-traffic-portal.md). This configuration contains a set of rule collections, and each rule collection contains one or more security admin rules. Once created, you associate the rule collection with the network groups requiring security admin rules. The rules are then applied to all virtual networks contained in the network groups when the configuration is deployed. A single configuration provides a centralized and scalable enforcement of security policies across multiple virtual networks.+ ### Evaluation of security admin rules and network security groups (NSGs) Security admin rules and network security groups (NSGs) can be used to enforce network security policies in Azure. However, they have different scopes and priorities.
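To make that create-and-deploy flow concrete, here's a minimal sketch using the Azure CLI `virtual-network-manager` extension. All resource names are placeholders, and the exact parameter spellings (for example `--applies-to-groups` and `--dest-port-ranges`) are assumptions based on that extension rather than commands verified against this article:

```azurecli
# Create a security admin configuration in an existing network manager.
az network manager security-admin-config create --resource-group myRG \
    --network-manager-name myAVNM --configuration-name denyHighRiskPorts

# Add a rule collection that targets an existing network group.
az network manager security-admin-config rule-collection create --resource-group myRG \
    --network-manager-name myAVNM --configuration-name denyHighRiskPorts \
    --rule-collection-name blockPorts \
    --applies-to-groups network-group-id="<network-group-resource-id>"

# Add a deny rule for inbound Telnet (TCP port 23) across the network group.
az network manager security-admin-config rule-collection rule create --resource-group myRG \
    --network-manager-name myAVNM --configuration-name denyHighRiskPorts \
    --rule-collection-name blockPorts --rule-name denyTelnet --kind Custom \
    --access Deny --direction Inbound --protocol Tcp --priority 100 --dest-port-ranges 23
```

The rules only take effect once the configuration is committed to target regions (for example, with `az network manager post-commit --commit-type SecurityAdmin`), which the linked how-to guide walks through.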
virtual-network-manager Concept Virtual Network Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-virtual-network-flow-logs.md
+
+ Title: Monitoring security admin rules with Virtual Network Flow Logs
+description: This article covers using Network Watcher and Virtual Network Flow Logs to monitor traffic through security admin rules in Azure Virtual Network Manager.
++++ Last updated : 08/11/2023++
+# Monitoring Azure Virtual Network Manager with VNet flow logs (Preview)
+
+Monitoring traffic is critical to understanding how your network is performing and to troubleshooting issues. Administrators can use VNet flow logs (Preview) to verify whether traffic is flowing through, or being blocked on, a VNet by a [security admin rule](concept-security-admins.md). VNet flow logs (Preview) are a feature of Network Watcher.
+
+Learn more about [VNet flow logs (Preview)](../network-watcher/vnet-flow-logs-overview.md), including usage and how to enable them.
+
+> [!IMPORTANT]
+> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+> [!IMPORTANT]
+> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
+>
+> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Enable VNet flow logs (Preview)
+
+Currently, you need to enable VNet flow logs (Preview) on each VNet you want to monitor. You can enable VNet flow logs on a VNet by using [PowerShell](../network-watcher/vnet-flow-logs-powershell.md) or the [Azure CLI](../network-watcher/vnet-flow-logs-cli.md).
+
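For instance, a minimal Azure CLI sketch; the resource names are placeholders, and the `--vnet` parameter is an assumption based on the VNet flow logs (Preview) CLI guide linked above:

```azurecli
# Enable VNet flow logs (Preview) on one virtual network, writing logs to a storage account.
az network watcher flow-log create --location eastus --resource-group myResourceGroup \
    --name myVNetFlowLog --vnet myVNet --storage-account myStorageAccount
```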
+Here's an example of a flow log:
+
+```json
+{
+ "records": [
+ {
+ "time": "2022-09-14T09:00:52.5625085Z",
+ "flowLogVersion": 4,
+ "flowLogGUID": "a1b2c3d4-e5f6-g7h8-i9j0-k1l2m3n4o5p6",
+ "macAddress": "00224871C205",
+ "category": "FlowLogFlowEvent",
+ "flowLogResourceID": "/SUBSCRIPTIONS/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG",
+ "targetResourceID": "/subscriptions/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet01",
+ "operationName": "FlowLogFlowEvent",
+ "flowRecords": {
+ "flows": [
+ {
+ "aclID": "9a8b7c6d-5e4f-3g2h-1i0j-9k8l7m6n5o4p3",
+ "flowGroups": [
+ {
+ "rule": "DefaultRule_AllowInternetOutBound",
+ "flowTuples": [
+ "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0",
+ "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580",
+ "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0",
+ "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0",
+ "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0",
+ "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108",
+ "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0",
+ "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466"
+ ]
+ }
+ ]
+ },
+ {
+ "aclID": "b1c2d3e4-f5g6-h7i8-j9k0-l1m2n3o4p5q6",
+ "flowGroups": [
+ {
+ "rule": "BlockHighRiskTCPPortsFromInternet",
+ "flowTuples": [
+ "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0",
+ "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0"
+ ]
+ },
+ {
+ "rule": "Internet",
+ "flowTuples": [
+ "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0",
+ "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0",
+ "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0",
+ "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0",
+ "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0",
+ "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0",
+ "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
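Each flow tuple above packs the flow's details into fixed, comma-separated positions. Here's a decoded reading of one tuple from the sample; the field meanings are assumptions based on the VNet flow logs schema in the Network Watcher documentation, so treat this as a reading aid rather than a normative reference:

```azurecli
# 1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580
#   1663146003606    time stamp of the flow event (Unix epoch, milliseconds)
#   10.0.0.6         source IP            52.239.184.180   destination IP
#   23956            source port          443              destination port
#   6                protocol (TCP)       O                direction (outbound)
#   E                flow state (end)     NX               encryption (not encrypted)
#   3,767            packets and bytes, source to destination
#   2,1580           packets and bytes, destination to source
```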
++
+## Next steps
+> [!div class="nextstepaction"]
+> Learn more about [VNet Flow Logs](../network-watcher/vnet-flow-logs-overview.md) and how to use them.
+> Learn more about [Event log options for Azure Virtual Network Manager](concept-event-logs.md).
virtual-network-manager Create Virtual Network Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-template.md
Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template'
-description: In this article, you create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template, ARM template.
+ Title: 'Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template'
+description: In this article, you deploy various network topologies with Azure Virtual Network Manager by using an Azure Resource Manager template (ARM template).
-# Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template -ARM template
+# Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template
Get started with Azure Virtual Network Manager by using Azure Resource Manager templates to manage connectivity for all your virtual networks.
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Yes,
In Azure, VNet peering and connected groups are two methods of establishing connectivity between virtual networks (VNets). While VNet peering works by creating a 1:1 mapping between each peered VNet, connected groups use a new construct that establishes connectivity without such a mapping. In a connected group, all virtual networks are connected without individual peering relationships. For example, if VNetA, VNetB, and VNetC are part of the same connected group, connectivity is enabled between each VNet without the need for individual peering relationships.
+### Do security admin rules apply to Azure Private Endpoints?
+
+Currently, security admin rules don't apply to Azure Private Endpoints that fall under the scope of a virtual network managed by Azure Virtual Network Manager.
### How can I explicitly allow Azure SQL Managed Instance traffic before having deny rules? Azure SQL Managed Instance has some network requirements. If your security admin rules might block these network requirements, you can use the following sample rules to allow SQL Managed Instance traffic at a higher priority than the deny rules that would otherwise block it.
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN comes in two flavors: Basic and Standard. In Basic Virtual WAN, hubs
### How are Availability Zones and resiliency handled in Virtual WAN?
-Virtual WAN is a collection of hubs and services made available inside the hub. The user can have as many Virtual WAN per their need. In a Virtual WAN hub, there are multiple services like VPN, ExpressRoute etc. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region becomes an Availability Zone after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there is resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions.
+Virtual WAN is a collection of hubs and services made available inside the hub. Users can have as many Virtual WANs as they need. In a Virtual WAN hub, there are multiple services such as VPN and ExpressRoute. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region gains Availability Zone support after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there's resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions.
-Currently, Azure Firewall can be deployed to support Availability Zones using Azure Firewall Manager Portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or CLI. There is currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall.
+Currently, Azure Firewall can be deployed to support Availability Zones using the Azure Firewall Manager portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones), or the CLI. There's currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall.
While the concept of Virtual WAN is global, the actual Virtual WAN resource is Resource Manager-based and deployed regionally. If the virtual WAN region itself were to have an issue, all hubs in that virtual WAN will continue to function as is, but the user won't be able to create new hubs until the virtual WAN region is available.
If a virtual hub learns the same route from multiple remote hubs, the order in w
* **AS Path** 1. Prefer routes with the shortest BGP AS-Path length irrespective of the source of the route advertisements.
- Note: In vWANs with multiple remote virtual hubs, If there is a tie between remote routes and remote site-to-site VPN routes. Remote site-to-site VPN will be preferred.
+   Note: In vWANs with multiple remote virtual hubs, if there's a tie between remote routes and remote site-to-site VPN routes, remote site-to-site VPN routes are preferred.
2. Prefer routes from local virtual hub connections over routes learned from remote virtual hub. 3. If there are routes from both ExpressRoute and Site-to-site VPN connections:
Transit between ER-to-ER is always via Global reach. Virtual hub gateways are de
### Is there a concept of weight in Azure Virtual WAN ExpressRoute circuits or VPN connections
-When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There is no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub.
+When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There's no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub.
### Does Virtual WAN prefer ExpressRoute over VPN for traffic egressing Azure
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub?
-The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It is recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
+The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It's recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
### Can hubs be created in different resource groups in Virtual WAN?
Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma
Both Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that can be utilized by NVAs (Network Virtual Appliance) to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ in the sense that Azure Route Server is typically deployed by a self-managed customer hub VNet whereas Azure Virtual WAN provides a zero-touch fully meshed hub service to which customers connect their various spokes end points (Azure VNet, on-premises branches with site-to-site VPN or SDWAN, remote users with point-to-site/Remote User VPN and Private connections with ExpressRoute) and enjoy BGP Peering for NVAs deployed in spoke VNet along with other vWAN capabilities such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no hassle inter-region security, Secure Hub/Azure firewall etc. For more details about Virtual WAN BGP Peering, please see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md).
-### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure Portal?
+### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal?
-When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure Portal.
+When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure portal.
For more information regarding the available options third-party security providers and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md).
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
### <a name="why-am-i-seeing-a-message-and-button-called-update-router-to-latest-software-version-in-portal."></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
-Azure-wide Cloud Services-based infrastructure is deprecating. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal. If the button is not visible, please open a support case.
+Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal. If the button isn't visible, please open a support case.
-You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
There are several limitations with the virtual hub router upgrade
-* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
+* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you'll have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you'll also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
-* If your Virtual WAN hub is connected to a combination of spoke virtual networks in the same region as the hub and a separate region than the hub, then you may experience a lack of connectivity to these respective spoke virtual networks. To resolve this and restore connectivity to these virtual networks, you can modify any of the virtual network connection properties (For example, you can modify the connection to propagate to a dummy label). We are actively working on removing this requirement.
-
-* Your Virtual WAN hub router can not currently be upgraded if you have a network virtual appliance in the virtual hub. We are actively working on removing this limitation.
+* Your Virtual WAN hub router can't currently be upgraded if you have a network virtual appliance in the virtual hub. We're actively working on removing this limitation.
* If your Virtual WAN hub is connected to more than 100 spoke virtual networks, then the upgrade may fail.
-If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.
+If the update fails for any reason, your hub will be automatically recovered to the old version to ensure there's still a working setup.
Additional things to note: * The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version.
-* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you will need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label).
+* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you'll need to update your virtual network connection after the virtual hub upgrade (for example, you can configure the virtual network connection to propagate to a dummy label).
### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?
web-application-firewall Rate Limiting Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-configure.md
+
+ Title: Create rate limiting custom rules for Application Gateway WAF v2 (preview)
+
+description: Learn how to configure rate limit custom rules for Application Gateway WAF v2.
+++ Last updated : 08/16/2023++++
+# Create rate limiting custom rules for Application Gateway WAF v2 (preview)
+
+> [!IMPORTANT]
+> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Rate limiting enables you to detect and block abnormally high levels of traffic destined for your application. Rate limiting works by counting all traffic that matches the configured rate limit rule and performing the configured action on matching traffic that exceeds the configured threshold. For more information, see [Rate limiting overview](rate-limiting-overview.md).
+
+## Configure Rate Limit Custom Rules
+
+Use the following information to configure rate limit rules for Application Gateway WAF v2.
+
+**Scenario One** - Create a rule that rate limits traffic from each client IP address exceeding the configured threshold, matching all traffic.
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 200 for Rate limit threshold (requests)
+1. Select Client address for Group rate limit traffic by
+1. Under Conditions, choose IP address for Match Type
+1. For Operation, select the Does not contain radio button
+1. For match condition, under IP address or range, enter 255.255.255.255/32
+1. Leave the action set to Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator IPMatch -MatchValue 255.255.255.255/32 -NegationCondition $True
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName ClientAddr
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name ClientIPRateLimitRule -Priority 90 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name ClientIPRateLimitRule --priority 90 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"ClientAddr"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr --operator IPMatch --policy-name ExamplePolicy --name ClientIPRateLimitRule --resource-group ExampleRG --value 255.255.255.255/32 --negate true
+```
+* * *
+
+**Scenario Two** - Create a rate limit custom rule to match all traffic except traffic originating from the United States. Traffic is grouped, counted, and rate limited based on the GeoLocation of the client source IP address.
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 500 for Rate limit threshold (requests)
+1. Select Geo location for Group rate limit traffic by
+1. Under Conditions, choose Geo location for Match Type
+1. In the Match variables section, select RemoteAddr for Match variable
+1. Select the Is not radio button for operation
+1. Select United States for Country/Region
+1. Leave the action set to Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator GeoMatch -MatchValue "US" -NegationCondition $True
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName GeoLocation
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name GeoRateLimitRule -Priority 95 -RateLimitDuration OneMin -RateLimitThreshold 500 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name GeoRateLimitRule --priority 95 --rule-type RateLimitRule --rate-limit-threshold 500 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"GeoLocation"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr --operator GeoMatch --policy-name ExamplePolicy --name GeoRateLimitRule --resource-group ExampleRG --value US --negate true
+```
+* * *
+
+**Scenario Three** - Create a rate limit custom rule matching all traffic for the login page, using the GroupBy None variable. This groups and counts all traffic that matches the rule as one, and applies the action across all traffic matching the rule (/login).
+
+#### [Portal](#tab/browser)
+
+1. Open an existing Application Gateway WAF Policy
+1. Select Custom Rules
+1. Add Custom Rule
+1. Add Name for the Custom Rule
+1. Select the Rate limit Rule Type radio button
+1. Enter a Priority for the rule
+1. Choose 1 minute for Rate limit duration
+1. Enter 100 for Rate limit threshold (requests)
+1. Select None for Group rate limit traffic by
+1. Under Conditions, choose String for Match Type
+1. In the Match variables section, select RequestUri for Match variable
+1. Select the Is radio button for operation
+1. For Operator select contains
+1. Enter the login page path for Match value. In this example, we use /login
+1. Leave the action set to Deny traffic
+1. Select Add to add the custom rule to the policy
+1. Select Save to save the configuration and make the custom rule active for the WAF policy.
+
+#### [PowerShell](#tab/powershell)
+```azurepowershell
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestUri
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator Contains -MatchValue "/login" -NegationCondition $False
+$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName None
+$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable
+$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name LoginRateLimitRule -Priority 99 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled
+```
+#### [CLI](#tab/cli)
+```azurecli
+az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name LoginRateLimitRule --priority 99 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"None"'}]}]'
+az network application-gateway waf-policy custom-rule match-condition add --match-variables RequestUri --operator Contains --policy-name ExamplePolicy --name LoginRateLimitRule --resource-group ExampleRG --value '/login'
+```
+* * *
+
+## Next steps
+
+[Customize web application firewall rules](application-gateway-customize-waf-rules-portal.md)
web-application-firewall Rate Limiting Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-overview.md
+
+ Title: Azure Web Application Firewall (WAF) rate limiting (preview)
+description: This article is an overview of Azure Web Application Firewall (WAF) on Application Gateway rate limiting.
++++ Last updated : 08/16/2023+++
+# What is rate limiting for Web Application Firewall on Application Gateway (preview)?
+
+> [!IMPORTANT]
+> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Rate limiting for Web Application Firewall on Application Gateway (preview) allows you to detect and block abnormally high levels of traffic destined for your application. By using rate limiting on Application Gateway WAF_v2, you can mitigate many types of denial-of-service attacks, protect against clients that have accidentally been misconfigured to send large volumes of requests in a short time period, or control traffic rates to your site from specific geographies.
+
+## Rate limiting policies
+
+Rate limiting is configured using custom WAF rules in a policy.
+
+When you configure a rate limit rule, you must specify the threshold: the number of requests allowed within the specified time period. Rate limiting on Application Gateway WAF_v2 uses a sliding window algorithm to determine when traffic has breached the threshold and needs to be dropped. During the first window where the threshold for the rule is breached, any more traffic matching the rate limit rule is dropped. From the second window onwards, traffic up to the threshold within the window configured is allowed, producing a throttling effect.
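As a short worked walkthrough, assume a threshold of 100 requests per one-minute window (the numbers are illustrative):

```azurecli
# Window 1 (00:00-01:00): 250 matching requests arrive. After the first 100 requests
#   the threshold is breached, so the remaining 150 requests are dropped.
# Window 2 (01:00-02:00) onward: up to 100 matching requests are allowed per window;
#   anything beyond the threshold in that window is dropped, throttling the client.
```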
+
+You must also specify a match condition, which tells the WAF when to activate the rate limit. You can configure multiple rate limit rules that match different variables and paths within your policy.
+
+Application Gateway WAF_v2 also introduces a *GroupByUserSession* setting, which must be configured. The *GroupByUserSession* setting specifies how requests are grouped and counted for a matching rate limit rule.
+
+The following three *GroupByVariables* are currently available:
+- *ClientAddr* - This is the default setting and it means that each rate limit threshold and mitigation applies independently to every unique source IP address.
+- *GeoLocation* - Traffic is grouped by their geography based on a Geo-Match on the client IP address. So for a rate limit rule, traffic from the same geography is grouped together.
+- *None* - All traffic is grouped together and counted against the threshold of the Rate Limit rule. When the threshold is breached, the action triggers against all traffic matching the rule and doesn't maintain independent counters for each client IP address or geography. It's recommended to use *None* with specific match conditions such as a sign-in page or a list of suspicious User-Agents.
+
+## Rate limiting details
+
+The configured rate limit thresholds are counted and tracked independently for each endpoint the Web Application Firewall policy is attached to. For example, a single WAF policy attached to five different listeners maintains independent counters and threshold enforcement for each of the listeners.
+
+The rate limit thresholds aren't always enforced exactly as defined, so rate limiting shouldn't be used for fine-grained control of application traffic. Instead, it's recommended for mitigating anomalous rates of traffic and for maintaining application availability.
+
+The sliding window algorithm blocks all matching traffic for the first window in which the threshold is exceeded, and then throttles traffic in future windows. Use caution when defining thresholds for configuring wide-matching rules with either *GeoLocation* or *None* as the *GroupByVariables*. Incorrectly configured thresholds could lead to frequent short outages for matching traffic.
++
+## Next step
+
+- [Create rate limiting custom rules for Application Gateway WAF v2 (preview)](rate-limiting-configure.md)
web-application-firewall Waf Sensitive Data Protection Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection-configure.md
Previously updated : 06/13/2023 Last updated : 08/15/2023 # How to mask sensitive data on Azure Web Application Firewall
$logScrubbingRuleConfig = New-AzApplicationGatewayFirewallPolicyLogScrubbingConf
``` #### [CLI](#tab/cli)
-The Azure CLI commands to enable and configure Sensitive Data Protection are coming soon.
+Use the following Azure CLI command to [create and configure](/cli/azure/network/application-gateway/waf-policy/policy-setting) Log Scrubbing rules for Sensitive Data Protection:
+```azurecli
+az network application-gateway waf-policy policy-setting update -g <MyResourceGroup> --policy-name <MyPolicySetting> --log-scrubbing-state <Enabled/Disabled> --scrubbing-rules "[{state:<Enabled/Disabled>,match-variable:<MatchVariable>,selector-match-operator:<Operator>,selector:<Selector>}]"
+```
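For example, a hypothetical invocation that masks the `X-Forwarded-For` request header in WAF logs; the resource group and policy names are placeholders:

```azurecli
az network application-gateway waf-policy policy-setting update -g myResourceGroup \
    --policy-name myWAFPolicy --log-scrubbing-state Enabled \
    --scrubbing-rules "[{state:Enabled,match-variable:RequestHeaderNames,selector-match-operator:Equals,selector:X-Forwarded-For}]"
```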